6.0 - 11.0 years
25 - 37 Lacs
Bengaluru
Work from Office
Skills Required:
- Familiarity with data processing engines such as Apache Spark, Flink, or other big data tools.
- Design, develop, and implement robust data lake architectures on cloud platforms (AWS/Azure).
- Implement streaming and batch data pipelines using Apache Hudi, Apache Hive, and cloud-native services such as AWS Glue and Azure Data Lake.
- Architect and optimize ingestion, compaction, partitioning, and indexing strategies in Apache Hudi (a minimal upsert sketch follows below).
- Develop scalable data transformation and ETL frameworks using Python, Spark, and Flink.
- Work closely with DataOps/DevOps to build CI/CD pipelines and monitoring tools for data lake platforms.
- Ensure data governance, schema evolution handling, lineage tracking, and compliance.
- Sound knowledge of Hive, Parquet/ORC formats, and the trade-offs between Delta Lake, Hudi, and Iceberg.
- Strong understanding of schema evolution, data versioning, and ACID guarantees in data lakes.
- Collaborate with analytics and BI teams to deliver clean, reliable, and timely datasets.
- Troubleshoot performance bottlenecks in big data processing workloads and pipelines.
- Experience with data governance tools and practices, including data cataloging, data lineage, and metadata management.
- Strong understanding of data integration and movement between storage systems (databases, data lakes, data warehouses).
- Strong understanding of API integration for data ingestion, including RESTful services and streaming data.
- Experience in data migration strategies, tools, and frameworks for moving data from legacy on-premises systems to cloud-based solutions.
- Proficiency with data warehousing solutions (e.g., Google BigQuery, Snowflake).
- Expertise in data modeling tools and techniques (e.g., SAP Datasphere, EA Sparx).
- Strong knowledge of SQL and NoSQL databases (e.g., MongoDB, Cassandra).
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud).
Nice to Have:
- Experience with Apache Iceberg and Delta Lake.
- Familiarity with Kinesis, Kafka, or any streaming platform.
- Exposure to dbt, Airflow, or Dagster.
- Experience in data cataloging, data governance tools, and column-level lineage tracking.
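For illustration, here is a minimal sketch of the kind of Hudi work the posting describes: a Spark batch upsert with explicit record-key, partition-path, and inline-compaction settings. The table name, paths, and field names are hypothetical placeholders, and the sketch assumes the Hudi Spark bundle is available on the cluster.

```python
# A minimal sketch of a batch upsert into an Apache Hudi table with Spark.
# All names and paths below are hypothetical, not from the posting.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hudi-upsert-sketch")
    # Hudi requires Kryo serialization; assumes the Hudi bundle is on the classpath.
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

incoming = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical source

hudi_options = {
    "hoodie.table.name": "orders",
    "hoodie.datasource.write.recordkey.field": "order_id",
    "hoodie.datasource.write.partitionpath.field": "order_date",
    "hoodie.datasource.write.precombine.field": "updated_at",
    "hoodie.datasource.write.operation": "upsert",
    # Inline compaction for merge-on-read tables: compact after N delta commits.
    "hoodie.compact.inline": "true",
    "hoodie.compact.inline.max.delta.commits": "5",
}

(incoming.write.format("hudi")
    .options(**hudi_options)
    .mode("append")
    .save("s3://example-bucket/lake/orders/"))
```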
Posted 1 week ago
6.0 - 10.0 years
3 - 12 Lacs
Hyderabad, Telangana, India
On-site
Support DataOps and SRE operations, assisting in offshore delivery of DataOps, BIOps, Data IntegrationOps, and related initiatives.
Responsibilities:
- Assist in managing day-to-day DataOps activities, including incident resolution, SLA adherence, and stakeholder engagement (a small SLA-tracking sketch follows below).
- Assist in implementing governance frameworks, tracking KPIs, and ensuring adherence to operational SLAs.
- Contribute to process standardization and automation efforts, improving service efficiency and scalability.
- Promote a customer-centric approach, ensuring high service quality and proactive issue resolution.
- Collaborate with onshore teams and business stakeholders, ensuring alignment of offshore activities with business needs.
- Monitor and optimize resource utilization, leveraging automation and analytics to improve productivity.
- Support continuous improvement efforts, identifying operational risks and ensuring compliance with security and governance policies.
- Participate in Agile work intake and management processes, contributing to strategic execution within data platform teams.
- Provide operational support for cloud infrastructure and data services, ensuring high availability and performance.
- Document and enhance operational policies and crisis management functions, supporting rapid incident response.
- Assist in team development efforts, fostering a collaborative and agile work environment.
- Adapt to changing priorities, supporting teams in maintaining focus on key deliverables.
Qualifications:
- 6+ years of technology experience in a global organization, preferably in the CPG industry.
- 4+ years of experience in Data & Analytics, with a foundational understanding of data engineering, data management, and operations.
- 3+ years of cross-functional IT experience, working with diverse teams and stakeholders.
- 1-2 years of leadership or coordination experience, supporting team operations and service delivery.
- Strong communication and collaboration skills, with the ability to convey technical concepts to non-technical audiences.
- Customer-focused mindset, ensuring high-quality service and responsiveness to business needs.
- Experience in supporting technical operations for enterprise data platforms, preferably in a Microsoft Azure environment.
- Basic understanding of Site Reliability Engineering (SRE) practices, including incident response, monitoring, and automation.
- Ability to drive operational stability, supporting proactive issue resolution and performance optimization.
- Strong analytical and problem-solving skills, with a continuous improvement mindset.
- Experience working in large-scale, data-driven environments, ensuring smooth operations of business-critical solutions.
- Ability to support governance and compliance initiatives, ensuring adherence to data standards and best practices.
- Familiarity with data acquisition, cataloging, and data management tools.
- Strong organizational skills, with the ability to manage multiple priorities effectively.
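As a flavor of the SLA-adherence tracking mentioned above, here is a small, self-contained sketch in pandas; the ticket fields and the 4-hour SLA target are illustrative assumptions, not details from the posting.

```python
# A minimal sketch of SLA-adherence reporting over incident tickets.
import pandas as pd

incidents = pd.DataFrame({
    "ticket_id": ["INC1", "INC2", "INC3"],  # hypothetical tickets
    "opened":   pd.to_datetime(["2024-05-01 08:00", "2024-05-01 09:30", "2024-05-02 10:00"]),
    "resolved": pd.to_datetime(["2024-05-01 10:15", "2024-05-01 15:00", "2024-05-02 11:00"]),
})

sla = pd.Timedelta(hours=4)  # assumed SLA target
incidents["within_sla"] = (incidents["resolved"] - incidents["opened"]) <= sla

adherence = incidents["within_sla"].mean()
print(f"SLA adherence: {adherence:.0%}")  # prints 67% for this sample
```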
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.
Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.
At Bristol Myers Squibb, we are inspired by a single vision - transforming patients' lives through science. In oncology, hematology, immunology, and cardiovascular disease - with one of the most diverse and promising pipelines in the industry - each of our passionate colleagues contributes to innovations that drive meaningful change. We bring a human touch to every treatment we pioneer. Join us and make a difference.
Position Summary
At BMS, digital innovation and Information Technology are central to our vision of transforming patients' lives through science. To accelerate our ability to serve patients around the world, we must unleash the power of technology. We are committed to being at the forefront of transforming the way medicine is made and delivered by harnessing the power of computer and data science, artificial intelligence, and other technologies to promote scientific discovery, faster decision making, and enhanced patient care. If you want an exciting and rewarding career that is meaningful, consider joining our diverse team!
As a Data Engineer based out of our BMS Hyderabad office, you are part of the Data Platform team and support the larger Data Engineering community that delivers data and analytics capabilities across different IT functional domains. The ideal candidate will have a strong background in data engineering, DataOps, and cloud-native services, and will be comfortable working with both structured and unstructured data.
Key Responsibilities
- Design, build, and maintain ETL pipelines and data products, evolve those products over time, and select the data architecture best suited to the organization's needs.
- Deliver high-quality data products and analytics-ready data solutions.
- Work with an end-to-end ownership mindset; innovate and drive initiatives through completion.
- Develop and maintain data models to support reporting and analysis needs.
- Optimize data storage and retrieval to ensure efficient performance and scalability.
- Collaborate with data architects, data analysts, and data scientists to understand their data needs and ensure the data infrastructure supports their requirements.
- Ensure data quality and integrity through data validation and testing.
- Implement and maintain security protocols to protect sensitive data.
- Stay up to date with emerging trends and technologies in data engineering and analytics.
Closely partner with the Enterprise Data and Analytics Platform team, other functional data teams, and the Data Community lead to shape and adopt data and technology strategy.
- Serves as the Subject Matter Expert on Data & Analytics Solutions.
- Knowledgeable in evolving trends in data platforms and product-based implementation.
- Has an end-to-end ownership mindset in driving initiatives through completion.
- Comfortable working in a fast-paced environment with minimal oversight.
- Mentors other team members effectively to unlock full potential.
- Prior experience working in an Agile/product-based environment.
Qualifications & Experience
- 5+ years of hands-on experience implementing and operating data capabilities and cutting-edge data solutions, preferably in a cloud environment.
- Breadth of experience in technology capabilities spanning the full life cycle of data management, including data lakehouses, master/reference data management, data quality, and analytics/AI-ML.
- In-depth knowledge and hands-on experience with AWS Glue services and the AWS data engineering ecosystem (a minimal Glue job sketch follows below).
- Hands-on experience developing and delivering data and ETL solutions with technologies such as AWS data services (Redshift, Athena, Lake Formation, etc.), Cloudera Data Platform, and Tableau is a plus.
- 5+ years of experience in data engineering or software development.
- Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Strong programming skills in languages such as Python, R, and Scala, and in libraries/frameworks such as PyTorch, PySpark, and Pandas.
- Experience with SQL and database technologies such as MySQL, PostgreSQL, Presto, etc.
- Experience with cloud-based data technologies such as AWS, Azure, or Google Cloud Platform.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Functional knowledge of or prior experience in the Life Sciences Research and Development domain is a plus.
- Experience and expertise in establishing agile, product-oriented teams that work effectively with teams in the US and other global BMS sites.
- Initiates challenging opportunities that build strong capabilities for self and team.
- Demonstrates a focus on improving processes, structures, and knowledge within the team.
- Leads in analyzing current states, delivers strong recommendations grounded in an understanding of the environment's complexity, and executes to bring complex solutions to completion.
Why You Should Apply
Around the world, we are passionate about making an impact on the lives of patients with serious diseases. Empowered to apply our individual talents and diverse perspectives in an inclusive culture, our shared values of passion, innovation, urgency, accountability, inclusion, and integrity bring out the highest potential of each of our colleagues.
Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Our company is committed to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace adjustments and ongoing support in their roles.
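For context, a minimal AWS Glue PySpark job skeleton of the kind this role involves might look as follows; the catalog database, table, key column, and S3 path are hypothetical placeholders.

```python
# A minimal AWS Glue PySpark job skeleton: read from the Glue Data Catalog,
# deduplicate, and write Parquet to S3. Names below are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table as a DynamicFrame, then drop duplicate records.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="clinical_db", table_name="raw_trials"  # hypothetical names
)
df = dyf.toDF().dropDuplicates(["trial_id"])

df.write.mode("overwrite").parquet("s3://example-bucket/curated/trials/")
job.commit()
```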
Applicants can request an accommodation prior to accepting a job offer. If you require reasonable accommodation in completing this application, or any part of the recruitment process, direct your inquiries to [HIDDEN TEXT]. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.
If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.
Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as "Transforming patients' lives through science", every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.
On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role:
- Site-essential roles require 100% of shifts onsite at your assigned facility.
- Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture.
- For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.
BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to [HIDDEN TEXT]. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.
BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.
BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/
Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
As a Power BI Architect - Data Engineer, you will play a crucial role in designing, implementing, and managing comprehensive business intelligence solutions. Your focus will be on data modeling, report development, and ensuring data security and compliance. Working within high-performing and collaborative teams, you will present data migration solutions and influence key stakeholders in client groups. Your expertise will assist clients in driving towards strategic data architecture goals by enhancing the coherence, quality, security, and availability of the organization's data assets through the development of data migration roadmaps.
Your responsibilities will include designing and leading real-time data architectures for large volumes of information, implementing integration flows with Data Lakes and Microsoft Fabric, optimizing and governing tabular models in Power BI, and ensuring high availability, security, and scalability. You will also coordinate data quality standards with a focus on DataOps for continuous deployments and automation.
To be successful in this role, you should have demonstrable experience in master data management and at least 7 years of experience in designing and implementing BI solutions and data architectures. You must possess advanced modeling skills, proficiency in DAX, and expertise in optimization and governance. Strong knowledge and mastery of Data Lake, Microsoft Fabric, and real-time ingestion methods are essential. Hands-on experience with Python or R for data manipulation/transformation and automation is also required. Additionally, you should have proven experience in tabular modeling, DAX queries, and report optimization in Power BI.
Your ability to plan, define, estimate, and manage the delivery of work packages will be crucial, as will excellent communication skills and the flexibility to respond to various program demands. You should have a deep understanding of key technical developments in your area of expertise and be able to lead the definition of information and data models, data governance structures, and processes. Experience working in complex environments across multiple business and technology domains is preferred, along with the ability to bridge the gap between functional and non-functional teams.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As a Data Engineer at our company, you will play a crucial role in designing, developing, and maintaining data pipeline architecture to ensure the efficient flow of data across the organization. Your expertise in AWS Glue, Apache Airflow, Kafka, SQL, Python, and DataOps tools will be essential in integrating various data sources and optimizing data pipelines for performance and reliability.
Your responsibilities will include developing robust and scalable data pipelines, integrating data sources like SAP HANA and SQL databases, and optimizing pipelines for efficiency (a minimal orchestration sketch follows below). You will also be involved in designing data transformation processes, utilizing SQL and Python for ETL tasks, and ensuring data quality through rigorous testing.
Collaboration and communication will be key aspects of your role as you work closely with data scientists, analysts, and stakeholders to understand data requirements and deliver solutions that meet their needs. You will collaborate with cross-functional teams to implement DataOps practices and improve data lifecycle management.
Monitoring and optimization will be crucial as you monitor data pipeline performance, troubleshoot and resolve data-related issues, and implement monitoring systems to proactively address potential problems. You will also be responsible for maintaining comprehensive documentation of data pipelines, adhering to best practices in data engineering, and staying updated on industry trends.
Required skills for this role include extensive experience with AWS Glue, proficiency in Apache Airflow, strong knowledge of Kafka, advanced SQL skills, proficiency in Python, and experience with SAP HANA. Knowledge of Snowflake, other AWS data services, and big data technologies is preferred. Soft skills such as strong analytical abilities, excellent communication, attention to detail, and the ability to work independently on multiple projects will also be valuable in this role.
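As a sketch of the orchestration described above, the following hedged example defines an Airflow DAG that triggers an AWS Glue job on a daily schedule. It assumes Airflow 2.x with the apache-airflow-providers-amazon package installed; the DAG id, Glue job name, and region are hypothetical.

```python
# A minimal Airflow DAG that runs an existing AWS Glue job once a day.
from datetime import datetime
from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_sap_hana_load",     # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_glue_etl = GlueJobOperator(
        task_id="run_glue_etl",
        job_name="sap-hana-to-lake",  # hypothetical pre-existing Glue job
        region_name="us-east-1",
    )
```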
Posted 1 week ago
7.0 - 12.0 years
17 - 32 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Title: Senior Data Engineer
Location: Pan India
Experience: 7+ Years
Joining: Immediate/Short Notice Preferred
Job Summary: We are looking for an experienced Senior Data Engineer to design, develop, and optimize scalable data solutions across Enterprise Data Lake (EDL) and hybrid cloud platforms. The role involves data architecture, pipeline orchestration, metadata governance, and building reusable data products aligned with business goals.
Key Responsibilities:
- Design and implement scalable data pipelines (Spark, Hive, Kafka, Bronze-Silver-Gold architecture); a minimal Bronze-layer sketch follows below.
- Work on data architecture, modelling, and orchestration for large-scale systems.
- Implement metadata governance, lineage, and business glossary using Apache Atlas.
- Support DataOps/MLOps best practices and mentor teams.
- Integrate data across structured and unstructured sources (ODS, CRM, NoSQL).
Required Skills:
- Strong hands-on experience with Apache Hive, HBase, Kafka, Spark, and Elasticsearch.
- Expertise in data architecture, modelling, orchestration, and DataOps.
- Familiarity with Data Mesh, Data Product development, and hybrid cloud (AWS/Azure/GCP).
- Knowledge of metadata governance, ETL/ELT, and NoSQL data models.
- Strong problem-solving and communication skills.
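A minimal sketch of the Bronze layer named in the responsibilities: Spark Structured Streaming reads from Kafka and lands raw records as Parquet, leaving parsing and cleansing to the Silver layer. The broker address, topic, and paths are hypothetical placeholders.

```python
# Bronze-layer ingestion sketch: Kafka -> raw Parquet via Structured Streaming.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bronze-ingest-sketch").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Bronze keeps payloads as-is; schema enforcement happens downstream.
bronze = raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

query = (
    bronze.writeStream.format("parquet")
    .option("path", "s3://example-bucket/bronze/orders/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
    .start()
)
```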
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Pune, Maharashtra
On-site
At Medtronic, you can embark on a lifelong journey of exploration and innovation, while contributing to the advancement of healthcare access and equity for all. As an IT Director of Data Engineering, you will play a pivotal role in designing and implementing data solutions for the Diabetes operating unit. Your responsibilities will involve leading the development and maintenance of the enterprise data platform, overseeing data pipeline management, ensuring scalability and performance, integrating new technologies, collaborating with various teams, and managing projects and budgets effectively.
Your role will require a master's degree in statistics, computer science, mathematics, data science, or a related field, with significant experience in data management. You must have demonstrated expertise in AWS services, Databricks, big data technologies, ETL tools, programming languages such as Python, SQL, and Scala, data orchestration tools like Airflow, and data hygiene processes. Additionally, experience in managing teams, project delivery, and communication skills are essential for this position.
As a leader in this role, you will be expected to inspire technical teams, deliver complex projects within deadlines and budgets, and effectively communicate technical concepts to non-technical stakeholders. Certifications such as AWS Certified Solutions Architect are advantageous. Your strategic thinking, problem-solving abilities, attention to detail, and adaptability to a fast-paced environment are key attributes that will contribute to your success in this role.
Medtronic offers competitive salaries and a flexible benefits package that supports employees at every stage of their career and life. The company's commitment to its employees is evident in its values and recognition of their contributions to its success. Medtronic's mission of alleviating pain, restoring health, and extending life drives a global team of passionate individuals who work together to find innovative solutions to complex health challenges.
Posted 2 weeks ago
15.0 - 19.0 years
0 Lacs
Karnataka
On-site
As a Product Owner in DataOps Tools & Services at GSK, you will play a crucial role in driving efficiencies within the data product teams. Your primary responsibility will be to increase throughput and quality while ensuring the delivery of trustworthy data through repeatable and traceable processes. You will be accountable for defining and delivering the capability roadmap for one or more products aimed at facilitating the movement, processing, and protection of data in alignment with the FAIR principles within the Development Data Mesh.
Your scope of work will encompass various aspects of DataOps Tools & Services, including data exchange, harmonization, observability/monitoring, protection, and anonymization capabilities. In collaboration with data product owners, engineers, and key individuals in data quality, governance, and analytics enablement teams, you will ensure the seamless integration of capabilities and leverage automation wherever feasible. You will work alongside a dedicated team of product designers, analysts, and engineers, applying agile delivery methodologies to accelerate product progression and enhance customer value.
Your key responsibilities will include developing and executing 3-year product roadmaps aligned with the R&D strategy, managing and prioritizing the product backlog, driving PI planning events, and collaborating with cross-functional teams to deliver end-to-end solutions that meet user needs and business goals. Additionally, you will oversee the product lifecycle from ideation to launch, define key performance indicators and success metrics, and ensure fit-for-purpose governance is applied to the products within the domain.
To excel in this role, you are expected to have a Bachelor's degree or higher in computer science or engineering with at least 15 years of experience. You should have a proven track record of delivering successful products and services in pharma R&D analytics, as well as experience working with pharma R&D data across a global enterprise. Strong communication, relationship management, and influencing skills, along with experience in Agile and DevOps, are essential. Experience in vendor management, data anonymization, data access workflows, and compliance with data privacy regulations will be beneficial. A Master's or Doctorate in Tech/Life Sciences, SAFe/Agile certification, or experience in GxP validation processes, design thinking, or event-driven architecture would be considered a plus.
As an employer committed to Diversity and Inclusion, GSK encourages applicants to reach out for any necessary adjustments during the recruitment process. Join GSK in uniting science, technology, and talent to stay ahead of disease and positively impact the health of billions of people. As a global biopharma company, GSK focuses on preventing and treating diseases with vaccines and specialty and general medicines, emphasizing the science of the immune system and new platform and data technologies. Join us on this journey to get Ahead Together and contribute to making GSK a place where people can thrive and excel.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an experienced professional in the field of machine learning, you have designed, developed, and deployed real-world machine learning systems to drive the advancement of AI and scale machine learning solutions. Your expertise lies in operationalizing machine learning by creating scalable, robust, and secure products. You excel in leveraging technology to solve complex analytical problems and have a track record of:
- Engaging in various stages of machine learning system design and development
- Utilizing tools for machine learning pipeline management and monitoring such as MLflow, Pachyderm, Kubeflow, Seldon, and Grafana (a minimal MLflow tracking sketch follows below)
- Demonstrating proficiency in automation and deployment technologies like CircleCI, Jenkins, and GitHub Actions, as well as infrastructure-as-code tools like Terraform and CloudFormation
- Working with distributed processing frameworks like Spark and Dask, and having hands-on experience with cloud platforms and container technologies
- Applying software engineering concepts and best practices, including testing frameworks, packaging, API design, DevOps, DataOps, and MLOps
- Showcasing strong problem-solving skills and a knack for quickly adapting to new technologies, trends, and frameworks
Ideally, you hold an advanced degree in computer science, engineering, mathematics, or possess equivalent experience that has equipped you with the knowledge and skills required to excel in this role. If you are passionate about pushing the boundaries of machine learning and AI, and eager to contribute your expertise to a dynamic team, we encourage you to send your resume to ai.architect@adyog.com.
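As one concrete example of the pipeline-tracking tools listed, here is a small MLflow sketch that logs parameters, a metric, and a toy model; the model choice and values are illustrative, and it assumes mlflow and scikit-learn are installed.

```python
# A minimal experiment-tracking sketch with MLflow on a toy classifier.
import mlflow
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=0).fit(X_tr, y_tr)

    # Record what was tried and how well it did, so runs are comparable later.
    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", model.score(X_te, y_te))
    mlflow.sklearn.log_model(model, "model")
```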
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You are a highly skilled and experienced Data Architect with expertise in cloud-based solutions. You will be responsible for designing, implementing, and optimizing data architecture to meet the organization's current and future needs. Your role will involve data modeling, transformation, and governance, with hands-on experience on modern cloud platforms and tools such as Snowflake, Spark, data lakes, and data warehouses. Collaboration with cross-functional teams and stakeholders is crucial, and you will establish and enforce standards and guidelines across data platforms to ensure consistency, scalability, and best practices.
You will be accountable for architecting and implementing scalable, secure, and high-performance cloud data platforms that integrate data lakes, data warehouses, and databases. Developing comprehensive data models to support analytics, reporting, and operational needs will be a key responsibility. Leading the design and execution of ETL/ELT pipelines to process and transform data efficiently using tools like Talend, Matillion, SQL, Hadoop, AWS EMR, and Apache Spark is essential (a minimal ELT upsert sketch follows below). You will integrate diverse data sources into cohesive and reusable datasets for business intelligence and machine learning purposes.
Establishing, documenting, and enforcing standards and guidelines for data architecture, data modeling, transformation, and governance across all data platforms will be part of your role, ensuring consistency and best practices in data storage, integration, and security throughout the organization. You will establish and enforce data governance standards to ensure data quality, security, and compliance with regulatory requirements, implementing processes and tools to manage metadata, lineage, and data access controls.
Your expertise will be utilized in using Snowflake for advanced analytics and data storage needs, optimizing performance and cost efficiency, and leveraging modern cloud platforms to manage data lakes and ensure seamless integration with other services. Collaboration with business stakeholders, data engineers, and analysts to gather requirements and translate them into technical designs is essential, along with effectively communicating architectural decisions, trade-offs, and progress to both technical and non-technical audiences.
Continuous improvement is part of your role: you will stay updated on emerging trends in cloud and data technologies, recommend innovations to enhance the organization's data capabilities, and optimize existing architectures to improve scalability, performance, and maintainability.
Your technical skills should include expertise in data modeling and data architecture design principles; Talend, Matillion, SQL, big data technologies, Hadoop, AWS EMR, Apache Spark, Snowflake, and cloud-based data platforms; data lakes, data warehouses, and relational and NoSQL databases; data transformation techniques and ETL/ELT pipelines; DevOps/DataOps/MLOps tools; and standards and governance frameworks. You should have exceptional written and verbal communication skills to interact effectively with technical teams and business stakeholders. Ideally, you should have 5+ years of experience in data architecture focusing on cloud technologies, a proven track record of delivering scalable, cloud-based data solutions, and a Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Preferred qualifications include certifications in Snowflake, AWS data services, any RDBMS/NoSQL, AI/ML, or data governance; familiarity with machine learning workflows and data pipelines; and experience working in Agile development environments.
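A hedged sketch of the ELT work described in this posting: an upsert into Snowflake via the official Python connector, using a single MERGE statement from a staging table. Account, credentials, and table names are placeholders, not details from the role.

```python
# A minimal ELT upsert sketch with the Snowflake Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",  # placeholder account locator
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

# Upsert staged rows into the dimension table in one MERGE.
merge_sql = """
MERGE INTO dim_customer AS tgt
USING stg_customer AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
  tgt.email = src.email, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, src.updated_at)
"""

cur = conn.cursor()
try:
    cur.execute(merge_sql)
finally:
    cur.close()
    conn.close()
```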
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
About Credit Saison India:
Established in 2019, Credit Saison India (CS India) is one of the country's fastest-growing Non-Bank Financial Company (NBFC) lenders. With verticals in wholesale, direct lending, and tech-enabled partnerships with NBFCs and fintechs, CS India's tech-enabled model, along with underwriting capability, facilitates lending at scale, addressing India's significant credit gap, especially in underserved and underpenetrated population segments. Committed to long-term growth as a lender in India, CS India offers evolving financial solutions for MSMEs, households, individuals, and more. Registered with the Reserve Bank of India (RBI) and boasting an AAA rating from CRISIL and CARE Ratings, CS India currently operates through a network of 45 physical offices, serving 1.2 million active loans, managing an AUM of over US$1.5 billion, and employing approximately 1,000 individuals.
As part of Saison International, a global financial company dedicated to uniting people, partners, and technology to create resilient and innovative financial solutions for positive impact, CS India contributes to the mission by being a transformative partner in creating opportunities and enabling people's dreams.
Roles & Responsibilities:
- Promote the DataOps approach to data science, engineering, and analytics delivery processes to automate data provisioning, testing, and monitoring, and streamline CI/CD.
- Collaborate with data & ML leads to design and establish optimal data pipeline architecture for data solutions, including data science products.
- Ensure scalability and performance of data pipelines, and establish and maintain services to connect data products.
- Develop dashboards and tools for efficient monitoring of data and ML infrastructure, pipelines, ETL, and analytics delivery processes.
- Implement an end-to-end event instrumentation and alerting system to detect anomalies in the system or data (a minimal anomaly-check sketch follows below).
- Assist in managing data and ML infrastructure, including upgrades, monitoring, and optimization.
- Engage with IT DevOps engineers and participate in enterprise DevOps activities.
- Share knowledge on infrastructure and data standards with other developers, promoting engineering best practices.
- Contribute to innovative POCs with data & engineering teams.
- Stay adaptable to new technology approaches to leverage new technologies effectively.
Required Skills & Qualifications:
- Strong problem-solving skills, clear communication, and a positive contribution to a DevOps/DataOps culture.
- Knowledge of the latest DevOps tools and practices.
- Experience with data pipelines within AWS (Glue, Data Pipeline, Athena, EMR, DMS, Spark).
- Experience with database replication and databases like Aurora, MySQL, MariaDB, etc.
- Proficiency in building CI/CD pipelines for a containerized Java/Python codestack.
- Familiarity with Git workflow.
- Experience with applications deployed in AWS.
- Proficiency in configuration management and provisioning tools (e.g., Ansible, CloudFormation, Terraform).
- Knowledge of scripting languages like Bash, Python, JavaScript.
- Orchestration/containerization using Docker and Kubernetes.
- Basic understanding of data science & ML engineering.
- Bachelor's Degree in computer science or a related field, or a big data background from top-tier universities.
- Experience: 10+ years of relevant experience.
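A small, self-contained sketch of the anomaly detection flagged in the responsibilities: compare the latest pipeline metric against a z-score threshold over recent history. The metric, sample values, and the 3-sigma threshold are illustrative assumptions.

```python
# Flag a pipeline metric (e.g. daily row counts) that drifts far from its
# recent history; in practice the alert would feed an incident workflow.
import statistics

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)  # needs at least 2 historical points
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

row_counts = [10_120, 9_980, 10_340, 10_050, 10_210]  # recent daily loads (made up)
today = 3_400                                          # a sudden drop

if is_anomalous(row_counts, today):
    print("ALERT: row count anomaly detected; trigger incident workflow")
```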
Posted 2 weeks ago
6.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Withum is a place where talent thrives - where who you are matters. It's a place of endless opportunities for growth. A place where entrepreneurial energy plus inclusive teamwork equals exponential results.
Withum empowers clients and our professional staff with innovative tools and solutions to address their accounting, tax and overall business management and operational needs. As a US nationally ranked Top 25 firm, we recruit only the best and brightest people with a genuine passion for the business.
We are seeking an experienced Lead Consultant - Data Engineering with a strong background in consulting services and hands-on skills in building modern, scalable data platforms and pipelines. This is a client-facing, delivery-focused role. Please note that this position is centered around external client delivery and is not part of an internal IT or product engineering team. This is a foundational hire. You will be responsible for delivering hands-on client work, supporting our proprietary data products, and building the team underneath you.
Withum's brand is a reflection of our people, our culture and our strength. Withum has become synonymous with teamwork and client service excellence. The cornerstone of our success can truly be accredited to the dedicated professionals who work here every day: easy to work with, driven by a sense of purpose, caring for their co-workers, and committed to helping our clients grow and thrive. But our commitment goes beyond our clients as we continue to live the Withum Way, promoting personal and professional growth for all team members, clients, and surrounding communities.
How You'll Spend Your Time:
- Architect, implement, and optimize data transformation pipelines, data lakes, and cloud-native warehouses for mid- and upper mid-market clients.
- Deliver hands-on engineering work across client environments, building fast, scalable, and well-documented pipelines that support both analytics and AI use cases.
- Lead technical design and execution using tools such as Tableau, Microsoft Fabric, Synapse, Power BI, Snowflake, and Databricks, with good hands-on familiarity with SQL databases.
- Optimize for sub-50GB datasets and local or lightweight cloud execution where appropriate, minimizing unnecessary reliance on cluster-based compute.
- Collaborate with subject-matter experts to understand business use cases prior to designing the data model.
- Operate as a client-facing consultant: conduct discovery, define solutions, and lead agile project delivery.
- Switch context rapidly across 2-3 active clients or service streams in a single day.
- Provide support for our proprietary data products as needed.
- Provide advisory and strategic input to clients on data modernization, AI enablement, and FP&A transformation efforts.
- Deliver workshops, demos, and consultative training to business and technical stakeholders.
- Implement coding modifications to pre-existing code/procedures in a manner that results in a validated case study (i.e., if done properly, the result will be xyz and the total amount will reconcile to abc).
- Take full ownership of hiring, onboarding, and mentoring future data engineers and analysts within the India practice.
- During bench time, contribute to building internal data products and tooling powering our own consulting operations (e.g., utilization dashboards, delivery intelligence, practice forecasting).
- Help define and scale delivery methodology, best practices, and reusable internal accelerators for future engagements.
- Communicate openly about conflicting deadlines to ensure prioritization aligns with client expectations, with ample time to reset client expectations as needed.
- Ensure code is properly commented to help explain the logic or purpose behind more complex sections.
Requirements:
- 6+ years of hands-on experience in data engineering roles, with at least 3+ years in a consulting or client delivery environment.
- Proven ability to context-switch, self-prioritize, and communicate clearly under pressure.
- Demonstrated experience owning full lifecycle delivery, from architecture through implementation and client handoff.
- Strong experience designing and implementing ETL/ELT pipelines, preferably in SQL-first tools (e.g., dbt Core, SQLMesh, DuckDB); a minimal local-execution sketch follows below.
- Experience with Microsoft SQL Server/SSIS for maintenance and development of ETL processes.
- Real-world experience with SQL databases, Databricks, Snowflake, and/or Synapse, and a healthy skepticism of when to use them.
- Deep understanding of data warehousing, data lakes, data modeling, and incremental processing.
- Proficient in Python for ETL scripting, automation, and integration work.
- Experience with dbt Core or a comparable tool such as SQLMesh, Dataform, etc., in production environments.
- Strong practices around data testing, version control, documentation, and team-based dev workflows.
- Working knowledge of Power BI, Tableau, Looker, or similar BI tools; enough to support downstream teams, but not as your primary skillset.
- Experience building platforms for AI/ML workflows or supporting agentic architectures.
- Familiarity with Microsoft Fabric's Lakehouse implementation, Delta Lake, Iceberg, and Parquet.
- Background in DataOps, CI/CD for data pipelines, and metadata management.
- Microsoft certifications (e.g., Azure Data Engineer, Fabric Analytics Engineer) are a plus.
Website: www.withum.com
Withum will not discriminate against any employee or applicant for employment because of race, color, religion, sex, sexual orientation, gender identity, national origin, age, marital status, genetic information, disability or because he or she is a protected veteran.
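As a sketch of the SQL-first, local-execution approach the requirements describe, the following uses DuckDB to transform a sub-50GB Parquet extract without any cluster; the file paths, table name, and column names are hypothetical.

```python
# A minimal local ELT sketch with DuckDB: aggregate a Parquet extract
# into a reporting table, entirely on one machine.
import duckdb

con = duckdb.connect("warehouse.duckdb")  # local, file-backed database

con.execute("""
    CREATE OR REPLACE TABLE monthly_revenue AS
    SELECT date_trunc('month', order_date) AS month,
           customer_id,
           SUM(amount) AS revenue
    FROM read_parquet('extracts/orders/*.parquet')
    GROUP BY 1, 2
""")

print(con.execute("SELECT COUNT(*) FROM monthly_revenue").fetchone())
con.close()
```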
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
You are an experienced Data Engineering expert with a focus on Azure Data Factory (ADF). In this role, you will be responsible for designing and implementing end-to-end data solutions using ADF. Your primary tasks will include collaborating with stakeholders to gather requirements and developing robust pipelines that support analytics and business insights.
Your key responsibilities will involve designing and implementing complex data pipelines within Azure Data Factory. You will work on integrating data from various sources such as on-prem databases, APIs, and cloud storage. Additionally, you will develop ETL/ELT strategies to manage both structured and unstructured data effectively. Supporting data transformation, cleansing, and enrichment processes will also be a part of your role.
Furthermore, you will be required to implement logging, alerting, and monitoring mechanisms for ADF pipelines (a minimal run-and-poll sketch follows below). Collaboration with architects and business analysts to comprehend data requirements will be crucial. Writing and optimizing complex SQL queries for performance will also be an essential aspect of your responsibilities.
To excel in this role, you should possess strong hands-on experience with ADF, specifically in pipeline orchestration and data flows. Experience with Azure Data Lake, Azure Synapse Analytics, and Blob Storage will be beneficial. Proficiency in SQL and performance tuning is a must. Additionally, knowledge of Azure DevOps and CI/CD practices, along with a good understanding of DataOps and Agile environments, will be advantageous.
If you meet these qualifications and are excited about this opportunity, please share your resume with us at karthicc@nallas.com. We look forward to potentially working with you in Coimbatore in this hybrid role that offers a stimulating environment for your Data Engineering expertise.
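A hedged sketch of one way to back the monitoring described above: triggering and polling an ADF pipeline run from Python with the Azure SDK (azure-identity plus azure-mgmt-datafactory). The subscription, resource group, factory, and pipeline names are placeholders.

```python
# Trigger an ADF pipeline run and poll until it reaches a terminal status.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

run = client.pipelines.create_run(
    resource_group_name="rg-data",   # placeholders throughout
    factory_name="adf-prod",
    pipeline_name="pl_ingest_sales",
    parameters={"load_date": "2024-05-01"},
)

# Poll the run status; a real monitor would also alert on failure.
while True:
    status = client.pipeline_runs.get("rg-data", "adf-prod", run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline finished with status: {status}")
```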
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a part of Seosaph-infotech, you will play a crucial role in independently completing conceptual, logical, and physical data models for various supported platforms. Your responsibilities will include governing data design and modeling documentation, developing a deep understanding of business domains, driving collaborative reviews of data model design, and showcasing expertise in data at various levels such as low-latency, relational, and unstructured data stores.
Your expertise will be utilized in developing reusable data models based on cloud-centric, code-first approaches, partnering with the data stewards team for data discovery, supporting data analysis and requirements gathering, assisting with data planning and transformation, and creating Source to Target Mappings (STTM) for ETL and BI (an illustrative STTM sketch follows below).
To excel in this role, you should possess expertise in data modeling tools, experience with MPP database technologies, familiarity with version control systems and deployment tools, and working knowledge of agile development practices. Additionally, experience with metadata management, data lineage, and data glossaries will be advantageous.
This position offers the opportunity to work remotely and requires familiarity with SAP data models, especially in the context of HANA and S/4HANA, as well as experience with retail data like IRI and Nielsen. Join our dynamic team at Seosaph-infotech and contribute to crafting innovative solutions that drive business objectives and maximize reuse while collaborating with stakeholders to deliver exceptional data-driven products.
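To make the STTM deliverable concrete, here is an illustrative sketch of a mapping captured as data with a lightweight completeness check; every table, column, and rule name is a hypothetical example, not taken from the posting.

```python
# A Source-to-Target Mapping (STTM) as structured data, plus a quality gate
# that flags incomplete mapping rows before they reach ETL or BI teams.
sttm = [
    {"source": "crm.cust_nm",  "target": "dim_customer.customer_name", "rule": "TRIM + UPPER"},
    {"source": "crm.cust_dob", "target": "dim_customer.birth_date",    "rule": "CAST TO DATE"},
    {"source": "pos.sale_amt", "target": "fact_sales.amount",          "rule": "ROUND(2)"},
]

def validate_sttm(mappings: list[dict]) -> list[str]:
    """Return a list of issues for rows missing any required field."""
    issues = []
    for i, m in enumerate(mappings):
        for field in ("source", "target", "rule"):
            if not m.get(field):
                issues.append(f"row {i}: missing '{field}'")
    return issues

print(validate_sttm(sttm) or "STTM complete")
```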
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Database Designer/Administrator with Erwin experience, you will be responsible for designing, managing, optimizing, and securing databases to ensure high availability, performance, and compliance for TnD Project applications. Your primary focus will be on operational database management, data modeling, automation, backup strategies, query optimization, and system observability. You will collaborate closely with Database Architects, DevOps Teams, and Security Engineers to achieve the following objectives:
- Design and manage logical and physical data models using Erwin Data Modeler to ensure scalable and optimized database structures.
- Work with Database Architects to implement schema design changes, define data relationships, and optimize indexing strategies (a minimal index-usage sketch follows below).
- Implement high availability (HA) and disaster recovery (DR) strategies for mission-critical databases to ensure data resilience and continuity during unforeseen events.
- Monitor and optimize database performance, indexing, and query execution, utilizing advanced analytics tools for proactive analysis and tuning.
- Automate database provisioning, patching, and maintenance workflows using Infrastructure as Code (IaC) to streamline operational tasks and enhance efficiency.
- Ensure security best practices and compliance with industry standards such as SOC2 and GDPR and with internal IT audit policies by implementing access controls, encryption measures, and regular security patching.
In addition, you will be expected to work collaboratively with Database Architects, DevOps Engineers, and Security Teams to implement scalable database solutions, support developers in optimizing SQL queries, manage schema changes efficiently, and ensure compliance with regulatory and IT policies. Your success in this role will be measured by your ability to optimize database performance and reliability for TnD applications, establish secure and compliant data management strategies, automate database maintenance workflows, and proactively monitor and optimize costs for efficient resource utilization.
If you have 5+ years of experience in database technologies such as AWS Aurora (PostgreSQL), Oracle DB, Redis, and DynamoDB, along with proficiency in tools like Erwin Data Modeler, Lucidchart, and query performance monitoring tools, we encourage you to apply for this challenging opportunity in Bangalore with a leading IT company. Kindly share your resume along with your current CTC, expected CTC, and notice period for our review.
Thank you for considering this role.
Best regards,
Tanushree Kumar
Human Resources
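As a sketch of the index-optimization work listed above, the following queries pg_stat_user_indexes on a PostgreSQL-compatible database (such as Aurora PostgreSQL) to surface rarely used indexes; the connection details are placeholders and psycopg2 is assumed to be installed.

```python
# List the ten least-scanned indexes; candidates for review or removal.
import psycopg2

conn = psycopg2.connect(
    host="aurora-cluster.example.com",  # placeholder endpoint
    dbname="tnd", user="dba", password="***",
)

with conn.cursor() as cur:
    cur.execute("""
        SELECT schemaname, relname, indexrelname, idx_scan
        FROM pg_stat_user_indexes
        ORDER BY idx_scan ASC
        LIMIT 10
    """)
    for schema, table, index, scans in cur.fetchall():
        print(f"{schema}.{table} -> {index}: {scans} scans")

conn.close()
```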
Posted 2 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to build the future with AI?
At Genpact, we don't just keep up with technology; we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Senior Manager, Tech Solutions Architect (Visualization)!
As a Tech Solutions Architect (Visualization-specialised), you will design scalable data and analytics solutions across complex client landscapes, leveraging expertise in cloud platforms, DataOps, and modern data ecosystems. With a consultative approach, you will craft reference architectures, drive innovation, and enable business value through agile and efficient strategies. Additionally, you will shape large outsourcing and rightshoring engagements, aligning global delivery models with client objectives to ensure operational excellence and long-term success.
Responsibilities
- Collaborate with clients to understand challenges and design innovative data solutions.
- Develop scalable reference architectures that ensure feasibility and deliverability.
- Advocate for DataOps practices and cloud-native designs to optimize data strategies.
- Shape outsourcing and rightshoring models that align with client objectives.
- Create technology roadmaps focused on achievable and scalable solutions.
- Support presales efforts with proposals, client presentations, and impactful PoCs.
- Ensure proposed solutions are practical, cost-effective, and delivery-ready.
- Provide thought leadership on emerging trends and future-proofing data strategies.
- Align stakeholders on solution scope, deliverability, and long-term value creation.
- Mentor teams on best practices in solution design and presales strategy.
Qualifications we seek in you!
Minimum Qualifications / Skills
- Experience: Relevant years in cloud, data, and analytics solutioning with a focus on presales and solution architecture.
- Bachelor's Degree in Computer Science, IT, Engineering, Data Science, or a related field.
- Cloud certifications (Azure / AWS / GCP).
- Data & analytics certifications (e.g., Snowflake, Databricks).
- BI certifications (Tableau, Power BI).
- Advanced data certifications (Google Cloud, AWS Big Data).
- Architecture certifications (TOGAF/Zachman, Salesforce Certified Technical Architect) are preferred.
- Cloud Platforms: Hands-on expertise with at least one major cloud technology (Azure, AWS, or GCP).
- DataOps and Agile Practices: Familiarity with DataOps methodologies to streamline data pipelines and promote agile workflows.
- Outsourcing and Rightshoring: Experience in defining outsourcing and rightshoring strategies for global delivery models.
- Data Platforms: Proven ability to architect and implement solutions using platforms like Snowflake, Databricks, or Microsoft Fabric.
- BI Tools: Practical experience with business intelligence tools such as Tableau, Power BI, ThoughtSpot or Qlik.
- Reference Architectures: Demonstrated ability to translate business requirements into scalable and actionable reference architectures.
Preferred Qualifications / Skills
- Innovation Leadership: Track record of driving technology innovation and aligning solutions with long-term client goals.
- System Design Leadership: Expertise in designing complex system components, interfaces, and infrastructure for diverse client landscapes.
- Thought Leadership: Ability to stay ahead of technology trends and advocate for emerging solutions in client engagements.
- Community Building: Experience mentoring teams and fostering collaboration within architecture and engineering communities.
- Stakeholder Collaboration: Strong skills in engaging with business partners to understand priorities and define technology strategies.
- Governance and Strategy: Experience developing principles, policies, and governance models to align architecture with business objectives.
Why join Genpact?
- Lead AI-first transformation: build and scale AI solutions that redefine industries.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
Posted 3 weeks ago
10.0 - 15.0 years
30 - 32 Lacs
Bengaluru
Work from Office
Sales Director / Senior BDM - Data Engineering/Analytics - USA
Exp: 10+ Years
Industry: Digital, IT Services, Data
Domain (End-Clients): Technology, E-Commerce/Retail, BFSI, Healthcare or Manufacturing
Functional Area: Digital Solutions - DevOps, Cloud, Data, DataOps, AI/ML, Analytics, Mobility, Application Development, Testing Solutions
Must Have:
- Travelled to the US
- Sales Strategy and Planning
- Account Management
- Data Mining & Analytics
- Relationship Building
- Data-driven Decision Making
- Customer Needs Analysis
- Team Leadership
- Cloud Solution Selling
Responsibilities:
- Sell data-centric solutions/use cases such as Customer 360, Data Analytics, BI Reporting/Dashboards, Data Platform, and Document AI, along with large data migration projects, to enterprises.
- Develop strategic account plans aligned with client goals and company objectives.
- Identify opportunities for upselling and cross-selling data and analytics products and services to maximize revenue from existing clients.
- Stay up to date with the company's data and analytics products and services.
- Provide expert consultation to clients on how to best leverage data and analytics to drive business outcomes and achieve their goals.
- Coordinate the delivery of data and analytics projects for clients, ensuring projects are completed on time, within scope, and within budget.
- Serve as a liaison between clients and internal teams to ensure clear communication and alignment throughout the project lifecycle.
- Develop comprehensive account strategies aligned with both client objectives and organizational goals to maximize revenue opportunities and increase client satisfaction.
- Develop and maintain strong relationships with existing clients, ensuring customer onboarding, deep account mining, and customer centricity and success while delivering on customer needs.
- Collaborate with cross-functional teams, including data scientists and analysts, to develop innovative solutions and address client needs effectively.
- Work closely with CXOs to understand their long-term organizational vision and their quarterly and annual business objectives, and identify value drivers.
Posted 3 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Sales Director / Senior BDM - Data Engineering/Analytics - USA
Exp: 10+ Years
Industry: Digital, Data
Domain (End-Clients): Technology, E-Commerce/Retail, BFSI, Healthcare or Manufacturing
Functional Area: Digital Solutions - DevOps, Cloud, Data, DataOps, AI/ML, Analytics, Mobility, Application Development, Testing Solutions
Location: Bangalore / USA
Education - UG: Computer / IT / Electronics Graduate (BE / B Tech / BS); PG: MBA / MS is not necessary, but an advantage
Must Have:
- Travelled to the US
- Sales Strategy and Planning
- Account Management
- Data Mining & Analytics
- Relationship Building
- Data-driven Decision Making
- Customer Needs Analysis
- Team Leadership
- Cloud Solution Selling
Responsibilities:
- Sell data-centric solutions/use cases such as Customer 360, Data Analytics, BI Reporting/Dashboards, Data Platform, and Document AI, along with large data migration projects, to enterprises.
- Develop strategic account plans aligned with client goals and company objectives.
- Identify opportunities for upselling and cross-selling data and analytics products and services to maximize revenue from existing clients.
- Stay up to date with the company's data and analytics products and services.
- Provide expert consultation to clients on how to best leverage data and analytics to drive business outcomes and achieve their goals.
- Coordinate the delivery of data and analytics projects for clients, ensuring projects are completed on time, within scope, and within budget.
- Serve as a liaison between clients and internal teams to ensure clear communication and alignment throughout the project lifecycle.
- Develop comprehensive account strategies aligned with both client objectives and organizational goals to maximize revenue opportunities and increase client satisfaction.
- Develop and maintain strong relationships with existing clients, ensuring customer onboarding, deep account mining, and customer centricity and success while delivering on customer needs.
- Collaborate with cross-functional teams, including data scientists and analysts, to develop innovative solutions and address client needs effectively.
- Work closely with CXOs to understand their long-term organizational vision and their quarterly and annual business objectives, and identify value drivers.
Posted 3 weeks ago
9.0 - 14.0 years
9 - 14 Lacs
hyderabad, telangana, india
On-site
We are seeking a strategic leader to fill the role of VP, AI and Engineering. This individual will be responsible for leading and scaling a best-in-class AI, software, data, and quality engineering organization in India and across a global network. The role is accountable for delivering AI-infused applications and technology products that accelerate medicine and vaccine development and supercharge internal capabilities.
Roles and Responsibilities
Develop and Scale the Engineering Organization: Attract, develop, and retain engineering talent across various disciplines, including AI, software, data analytics, and quality engineering. Manage and scale a team of technology professionals, ensuring the right talent mix to meet business demands. Upskill the organization on new technologies and manage strategic third-party vendors for talent and capacity. Implement best practices and standards for all engineering disciplines.
Oversee High-Performing Technology Delivery: Partner with digital and tech product leaders to understand priorities, manage demand, and maintain product roadmaps. Staff engineering resources to deliver prioritized initiatives. Drive DevOps, DataOps, and MLOps platforms to enhance engineering productivity and automated testing.
Provide Regional Tech Leadership: Lead and manage the day-to-day operations of the site-based team, ensuring alignment with global strategic objectives. Oversee end-to-end technology projects, including software development and product delivery. Monitor industry trends and emerging technologies to ensure the site remains competitive and innovative. Foster a culture of collaboration, innovation, and continuous improvement.
Manage Stakeholder Communication and Risk: Maintain strong relationships with senior leadership, product teams, and other global stakeholders. Provide regular updates on performance, risks, and opportunities. Ensure the organization complies with relevant legal, regulatory, and company policies.
Skills Required
Experience in technology or operations leadership roles, specifically managing a tech team in a regional or similar market.
A proven track record of leading a technology organization within a pharma services or life sciences company (e.g., CRO, professional services, biotech/biopharma).
Proven experience in leading cross-functional teams and delivering complex, global, end-to-end technology projects.
Experience leveraging data, analytics, and AI to develop new products and services.
The ability to transform legacy technology and digital teams into a highly efficient, disciplined, delivery-oriented organization.
Experience managing both technical and operational aspects of a global business with teams in different geographic locations.
Experience leading a technology organization that provides both product development and SaaS solutions across a broad range of technologies such as Python, Java, Apex, Databricks, Workday, Oracle Fusion, ServiceNow, Salesforce, Veeva CRM, Veeva Vault Clinical, Microsoft Azure, AWS, and Oracle OCI.
Strong leadership and team-building abilities.
Excellent communication and interpersonal skills.
Deep understanding of modern software, AI, and data development methodologies, including Agile, DevOps, DataOps, and MLOps.
Strong business acumen, including the ability to manage budgets, resources, and operational performance.
Experience in a global or multi-site organization is highly desirable.
Posted 4 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
At Capgemini Invent, we believe that difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, all underpinned by technology created with purpose.
You act as a contact person for our customers and advise them on data-driven projects. You are responsible for designing viable architectures based on Microsoft Azure, AWS, Snowflake, Google Cloud (or similar) and for implementing analytics. You own architecture topics and solution scenarios in the areas of Cloud Data Analytics Platform, Data Engineering, Analytics, and Reporting.
Experience in Cloud and Big Data architecture, DevOps, Infrastructure as Code, DataOps, MLOps, business development (including support in the proposal process), data warehousing, data modeling, and data integration for enterprise data environments.
Experience with ETL tools, primarily Talend and/or other data integration tools (open source or proprietary); extensive experience with SQL and SQL scripting (PL/SQL and SQL query tuning and optimization) for relational databases such as PostgreSQL, Oracle, Microsoft SQL Server, and MySQL, and with NoSQL and document-based databases such as MongoDB.
Experience in data analysis, modeling (logical and physical data models), and design specific to a data warehouse/business intelligence environment (normalized and multi-dimensional modeling).
Excellent written, oral, and interpersonal communication skills, with the ability to communicate design solutions to both technical and non-technical audiences.
Ideally, experience in agile methods such as SAFe and Scrum, and experience in programming languages like Python, JavaScript, and Java/Scala.
You provide data services for enterprise information strategy solutions, working with business solutions leaders and teams to collect and translate information requirements into data to develop data-centric solutions. You design and develop modern enterprise data-centric solutions (e.g., DWH, Data Lake, Data Lakehouse) and governance solutions.
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with its deep industry expertise and partner ecosystem.
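To make the ETL and SQL expectations above concrete, here is a minimal, self-contained sketch of the extract-transform-load pattern in Python, using the standard library's sqlite3 purely as a stand-in for the PostgreSQL/Oracle/Talend stack named in the posting; the staging and target tables and their columns are invented for illustration.

```python
import sqlite3

# An in-memory database stands in for real source and target systems.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a raw staging table, as data might arrive from a source system.
cur.execute("CREATE TABLE stg_orders (order_id INTEGER, amount TEXT, country TEXT)")
cur.executemany(
    "INSERT INTO stg_orders VALUES (?, ?, ?)",
    [(1, "19.99", "de"), (2, "5.50", "DE"), (3, None, "fr")],
)

# Transform and Load: cast types, normalize country codes, drop bad rows,
# and land the result in a conformed target table.
cur.execute(
    "CREATE TABLE fct_orders (order_id INTEGER PRIMARY KEY, amount REAL, country TEXT)"
)
cur.execute(
    """
    INSERT INTO fct_orders
    SELECT order_id, CAST(amount AS REAL), UPPER(country)
    FROM stg_orders
    WHERE amount IS NOT NULL
    """
)
conn.commit()

print(cur.execute("SELECT * FROM fct_orders").fetchall())
# Expected: [(1, 19.99, 'DE'), (2, 5.5, 'DE')]
```

A dedicated integrator such as Talend expresses the same extract, cast, and load steps declaratively; the sketch only shows the shape of the logic the role works with.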
Posted 4 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
At Capgemini Invent, we believe that difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, all underpinned by technology created with purpose.
You will act as a contact person for our customers and advise them on data-driven projects. You will be responsible for designing viable architectures based on Microsoft Azure, AWS, Snowflake, Google Cloud (or similar) and for implementing analytics. You will be responsible for architecture topics and solution scenarios in the areas of Cloud Data Analytics Platform, Data Engineering, Analytics, and Reporting.
The ideal candidate will have experience in Cloud and Big Data architecture, DevOps, Infrastructure as Code, DataOps, MLOps, business development, data warehousing, data modeling, and data integration for enterprise data environments. Experience with ETL tools, primarily Talend and/or other data integration tools, is expected, along with extensive experience with SQL and SQL scripting (PL/SQL and SQL query tuning and optimization) for relational databases such as PostgreSQL, Oracle, Microsoft SQL Server, and MySQL, and with NoSQL and document-based databases such as MongoDB. Experience in data analysis, modeling (logical and physical data models), and design specific to a data warehouse/business intelligence environment is also required.
Your role will require excellent written, oral, and interpersonal communication skills, with the ability to communicate design solutions to both technical and non-technical audiences. Ideally, you will have experience in agile methods such as SAFe and Scrum, and experience in programming languages like Python, JavaScript, and Java/Scala.
You will provide data services for enterprise information strategy solutions and work with business solutions leaders and teams to collect and translate information requirements into data to develop data-centric solutions. You will design and develop modern enterprise data-centric solutions (e.g., DWH, Data Lake, Data Lakehouse) and governance solutions.
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, creating tangible impact for enterprises and society. With a strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with its deep industry expertise and partner ecosystem.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
As a member of the data engineering team at PepsiCo, you will play a crucial role in developing and overseeing data product build and operations. Your primary responsibility will be to drive a strong vision for how data engineering can proactively create a positive impact on the business. Working alongside a team of data engineers, you will build data pipelines, rest data on the PepsiCo Data Lake, and facilitate exploration and access for analytics, visualization, machine learning, and product development efforts across the company.
Your contributions will directly impact the design, architecture, and implementation of PepsiCo's flagship data products in areas such as revenue management, supply chain, manufacturing, and logistics. You will collaborate closely with process owners, product owners, and business users, operating in a hybrid environment that includes in-house, on-premise data sources as well as cloud and remote systems.
Your responsibilities will include actively contributing to code development, managing and scaling data pipelines, building automation and monitoring frameworks for data pipeline quality and performance, implementing best practices around systems integration, security, performance, and data management, and empowering the business through increased adoption of data, data science, and business intelligence. Additionally, you will collaborate with internal clients, drive solutioning and POC discussions, and evolve the architectural capabilities of the data platform by engaging with enterprise architects and strategic partners.
To excel in this role, you should have 6+ years of overall technology experience, including 4+ years of hands-on software development, data engineering, and systems architecture. You should also possess 4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools, along with expertise in SQL optimization and performance tuning and in programming languages like Python, PySpark, and Scala. Experience in cloud data engineering, specifically in Azure, is essential, and familiarity with Azure cloud services is a plus.
You should have experience in data modeling, data warehousing, building ETL pipelines, and working with data quality tools. Proficiency with MPP database technologies, cloud infrastructure, containerized services, version control systems, deployment and CI tools, and Azure services like Data Factory, Databricks, and Azure Machine Learning is desired. Additionally, experience with statistical/ML techniques, retail or supply chain solutions, metadata management, data lineage, data glossaries, agile development, and DevOps/DataOps concepts, as well as business intelligence tools, will be advantageous. A degree in Computer Science, Math, Physics, or a related technical field is preferred for this role.
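As a rough illustration of the PySpark ETL work this posting describes, the sketch below reads raw order data from a lake path, applies basic hygiene, aggregates daily revenue, and writes a partitioned output for BI. The paths, column names, and schema are assumptions made for the example, not details of PepsiCo's actual pipelines, and running it requires a Spark environment.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

# Ingest: read raw landing-zone data (path and schema are illustrative).
orders = spark.read.parquet("/datalake/raw/orders")

# Transform: basic hygiene plus a daily revenue aggregate per region.
daily = (
    orders
    .filter(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# Load: write partitioned output for downstream BI consumption.
(
    daily.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("/datalake/curated/daily_revenue")
)
```

In a production setting the same job would typically be scheduled by a tool like Azure Data Factory or Databricks Workflows, with the monitoring and quality checks the posting mentions wrapped around it.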
Posted 1 month ago
5.0 - 15.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Solutions Architect for a remote position based in India, you will be responsible for working on a new analytics platform, including its consolidation and implementation. Your main role will involve identifying, leading, and delivering data analysis and architecture optimization. You should have 5-15 years of experience and be fluent in English to communicate requirements effectively.
Your expertise in data lake infrastructure, data warehousing, and data analytics tools will be essential for this role. You must have a strong background in SQL optimization, performance tuning, and the development of stored procedures. Experience with database technologies like SQL, Oracle, or Informatica is also required. Additionally, you should be familiar with Agile-based development practices, including DevOps and DataOps.
As the ideal candidate, you will hold a bachelor's degree in a relevant field or possess equivalent experience. Strong analytical and problem-solving skills are necessary, and excellent written and verbal communication skills are essential for collaborating with cross-functional teams within an Agile methodology. You should also have a strong desire to work in a cross-functional environment.
In this role, you will lead the design and implementation of data analytics platforms with a focus on data lake infrastructure and data warehousing solutions. Your responsibilities will include optimizing SQL queries, improving database performance, and developing stored procedures to enhance system efficiency. You will work extensively with database technologies such as SQL, Oracle, and Informatica.
Furthermore, you will drive architecture decisions and provide technical leadership across multiple projects, typically leading three to four projects as the Technical Architect. Collaboration with cross-functional teams in Agile environments will be key, leveraging DevOps and DataOps practices for continuous delivery and integration. Your ability to analyze complex problems, debug issues effectively, and deliver scalable solutions aligned with business needs will play a crucial role in the success of the projects.
Overall, as a Solutions Architect, you will be expected to communicate technical concepts clearly and effectively to both technical and non-technical stakeholders, ensuring smooth execution of projects and that business requirements are met.
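One concrete form the SQL optimization and performance tuning responsibility takes is replacing a full table scan with an index lookup. The sketch below uses Python's built-in sqlite3 only as a convenient stand-in for Oracle or SQL Server, comparing the query plan before and after adding a covering index; the table, data, and index names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("APAC" if i % 2 else "EMEA", i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(amount) FROM sales WHERE region = ?"

# Before: no index on the filter column, so the planner scans the whole table.
print(cur.execute("EXPLAIN QUERY PLAN " + query, ("APAC",)).fetchall())

# A covering index holds both the filter column and the aggregated column,
# so the query can be answered from the index alone.
cur.execute("CREATE INDEX idx_sales_region_amount ON sales (region, amount)")
print(cur.execute("EXPLAIN QUERY PLAN " + query, ("APAC",)).fetchall())
```

The second plan should report a covering-index search instead of a table scan; the same before-and-after discipline applies when tuning queries and stored procedures on Oracle or SQL Server.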
Posted 1 month ago
12.0 - 16.0 years
0 Lacs
hyderabad, telangana
On-site
As a Reporting and Analytics Lead at HSBC, you will play a crucial role in managing a cross-functional team, strategic external suppliers, and business stakeholders. You will set clear development priorities, performance expectations, and accountability measures for supplier teams. Proactively managing risks, issues, and changes in scope will be essential to ensure alignment with business objectives.
Your responsibilities will include reporting regularly on project status, updating Jira and Confluence, preparing project plans, providing mentoring and guidance to team members, and fostering a culture of collaboration and accountability focused on delivery excellence. You will oversee the ingestion and transformation of data from automated feeds and manual sources, implement robust data validation processes, and lead the transformation of legacy data into scalable, modular platforms. Driving automation, reducing manual interventions, and defining enterprise data standards will be key aspects of your role. You will also design fault-tolerant ETL/ELT pipelines, ensure data integrity across all stages of analysis, and mitigate risks associated with decision-support systems through validation and testing.
In this role, you will act as a strategic partner in gathering and refining business requirements, conducting impact assessments, and translating business needs into clear documentation. You will build internal capability around data standardization, automation best practices, and documentation, ensuring that solutions meet both functional and non-functional business requirements. Engaging with business leaders and technical teams, you will facilitate decision-making and alignment, and lead workshops, presentations, and status meetings with diverse audiences.
To be successful in this role, you should possess a Master's degree in Business, Computer Science, Engineering, or a related field, along with 12+ years of experience in project management, enterprise data infrastructure, or engineering roles. A strong background in business analytics, data standards, and governance frameworks is required, as is familiarity with data pipeline tooling, automation practices, and version control. Hands-on experience with data transformation tools, basic knowledge of Python and SQL scripting, and relevant certifications such as PMP, PRINCE2, Agile/Scrum, or CBAP are preferred. A background in Financial Services, Banking, or an enterprise IT environment is advantageous, as is deep expertise in SQL Server, the GCP platform, and large-scale ETL/ELT architecture.
Your combination of technical skills, analytical acumen, collaborative abilities, and leadership mindset will enable you to contribute effectively to enhancing operational excellence and informed decision-making within the organization. Join HSBC and make a real impression by leveraging your expertise in reporting and analytics to drive impactful outcomes.
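The robust data validation processes described above usually begin with simple, automatable checks: the feed is non-empty, required fields are populated, and keys are unique. Below is a minimal pandas sketch of such a pre-load gate; the trade-feed column names are invented for illustration.

```python
import pandas as pd

def validate(df: pd.DataFrame, key: str, required: list[str]) -> list[str]:
    """Return a list of human-readable validation failures (empty list = pass)."""
    problems = []
    if df.empty:
        problems.append("dataset is empty")
    for col in required:
        nulls = df[col].isna().sum()
        if nulls:
            problems.append(f"{col}: {nulls} null value(s)")
    dupes = df.duplicated(subset=[key]).sum()
    if dupes:
        problems.append(f"{key}: {dupes} duplicate key(s)")
    return problems

# A tiny hypothetical feed with one null field and one duplicated key.
feed = pd.DataFrame(
    {"trade_id": [101, 102, 102],
     "desk": ["FX", None, "Rates"],
     "notional": [1e6, 2e6, 5e5]}
)
for issue in validate(feed, key="trade_id", required=["desk", "notional"]):
    print("FAIL:", issue)
# FAIL: desk: 1 null value(s)
# FAIL: trade_id: 1 duplicate key(s)
```

In production, checks of this shape would run inside the pipeline and fail the load rather than print, but the gating logic is the same.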
Posted 1 month ago
6.0 - 13.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You are required to have over 13 years of experience in IT, with at least 6 years in roles such as Technical Product Manager, Technical Program Manager, or Delivery Lead. Your responsibilities will involve overseeing the end-to-end delivery of data platform, AI, BI, and analytics projects to ensure they align with business objectives and stakeholder expectations. You will be responsible for developing and maintaining project plans, roadmaps, and timelines for data ingestion, transformation, governance, AI/ML models, and analytics deliverables.
Leading cross-functional teams, including data engineers, data scientists, BI analysts, architects, and business stakeholders, will be part of your role to deliver high-quality and scalable solutions within the set budget and timeframe. You will define, prioritize, and manage product and project backlogs focusing on data pipelines, data quality, governance, AI services, and BI dashboards. Collaboration with business units to gather requirements and translate them into actionable user stories and acceptance criteria will be essential.
Your responsibilities will extend to overseeing BI and analytics areas, ensuring data quality, lineage, security, and compliance requirements are incorporated throughout the project lifecycle. Coordinating UAT, performance testing, and user training, as well as acting as the primary point of contact for project stakeholders, will be crucial. Additionally, you will facilitate agile ceremonies, drive post-deployment monitoring, and optimize data and BI solutions to meet evolving business needs and performance standards.
Primary Skills:
13+ years of IT experience with 6+ years in relevant roles
Hands-on experience in data engineering, data pipelines, ETL processes, and data integration workflows
Proven track record managing data engineering, analytics, or AI/ML projects end to end
Strong understanding of modern data architecture and cloud platforms (Azure, AWS, GCP)
Proficiency in Agile methodologies, sprint planning, and backlog grooming
Excellent communication and stakeholder management skills
Secondary Skills:
Background in computer science, engineering, data science, or analytics
Experience with data engineering tools and services in AWS, Azure, and GCP
Understanding of BI, analytics, LLMs, RAG, prompt engineering, or agent-based AI systems
Experience leading cross-functional teams in matrixed environments
Certifications such as PMP, CSM, SAFe, or equivalent are a plus
Role: Technical Project Manager (Data)
Location: Trivandrum/Kochi
Close Date: 08-08-2025
Posted 1 month ago