7.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We're Hiring: Senior Data Engineer (7+ Years Experience)
Location: Gurugram, Haryana, India
Duration: 6 Months C2H (Contract to Hire)
Apply Now: [HIDDEN TEXT]

What We're Looking For:
- 7+ years of experience in data engineering
- Strong expertise in building scalable, robust batch and real-time data pipelines
- Proficiency in AWS Data Services (S3, Glue, Athena, EMR, Kinesis, etc.)
- Advanced SQL skills and deep knowledge of file formats: Parquet, Delta Lake, Iceberg, Hudi
- Hands-on experience with CDC patterns
- Experience with stream processing (Apache Flink, Kafka Streams) and distributed frameworks like PySpark
- Expertise in Apache Airflow for workflow orchestration
- Solid foundation in data warehousing concepts and experience with both relational and NoSQL databases
- Strong communication and problem-solving skills
- Passion for staying up to date with the latest in the data tech landscape
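For illustration only, a minimal PySpark Structured Streaming sketch of the kind of real-time pipeline this posting describes: consuming events from Kafka and landing them as Parquet. The broker address, topic name, and lake paths are hypothetical, and the job assumes the spark-sql-kafka connector package is available.

```python
# Hypothetical sketch: Kafka -> Parquet with PySpark Structured Streaming.
# Broker, topic, and paths are placeholders, not a real employer setup.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "orders")                     # assumed topic name
    .load()
    # Kafka delivers key/value as binary; cast to strings for downstream use.
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://data-lake/raw/orders/")           # assumed lake path
    .option("checkpointLocation", "s3a://data-lake/chk/orders/")
    .start()
)
query.awaitTermination()
```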
Posted 2 days ago
3.0 - 4.0 years
3 - 4 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
- Liaising with coworkers and clients to elucidate the requirements for each task
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed
- Reformulating existing frameworks to optimize their functioning
- Testing such structures to ensure that they are fit for use
- Preparing raw data for manipulation by data scientists
- Detecting and correcting errors in your work
- Ensuring that your work remains backed up and readily accessible to relevant coworkers
- Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs
- Experience with Azure ADLS, Apache Parquet, Iceberg, Kubeflow, and Airflow
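As a rough sketch of the Airflow orchestration this role calls for, the following defines a one-task DAG; the DAG id, schedule, and task body are placeholder assumptions, written against the Airflow 2.x API.

```python
# Hypothetical one-task Airflow DAG; names and schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def prepare_raw_data():
    # Placeholder task body: e.g., pull raw files from ADLS and write Parquet.
    print("preparing raw data for downstream analysis")

with DAG(
    dag_id="daily_raw_prep",          # assumed DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    PythonOperator(task_id="prepare_raw_data", python_callable=prepare_raw_data)
```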
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As a Senior Lead Engineer specializing in Python and Spark within the AWS environment, you will have the crucial responsibility of designing, building, and maintaining robust, scalable, and efficient ETL pipelines. Your primary focus will be on ensuring alignment with data lakehouse architecture on AWS and optimizing workflows using AWS services such as Glue, Glue Data Catalog, Lambda, and S3. Your expertise will play a key role in implementing data quality and governance frameworks to maintain reliable and consistent data processing across the platform.

Collaboration with cross-functional teams to gather requirements, provide technical insights, and deliver high-quality data solutions will be essential in your role. You will drive the migration of existing data processing workflows to the lakehouse architecture by leveraging Iceberg capabilities and establishing best practices for coding standards, design patterns, and system architecture. Your leadership will extend to technical discussions, mentoring team members, and fostering a culture of continuous learning and innovation. Ensuring that all solutions are secure, compliant, and meet company and industry standards will be a top priority.

Key relationships in this role will include interactions with Senior Management and Architectural Group, Development Managers, Team Leads, Data Engineers, Analysts, and Agile team members. Your extensive expertise in Python and Spark, along with strong experience in AWS services, data quality and governance, and scalable architecture, will be crucial for success in this position. Desired skills include familiarity with additional programming languages such as Java, experience with serverless computing paradigms, and knowledge of data visualization or reporting tools for effective stakeholder communication. Certification in AWS or data engineering would be beneficial. A bachelor's degree in Computer Science, Software Engineering, or a related field is helpful for this role, although equivalent professional experience or certifications will also be considered.

Joining our team at LSEG means being part of a global financial markets infrastructure and data provider dedicated to driving financial stability, empowering economies, and enabling sustainable growth. Our culture, built on values of Integrity, Partnership, Excellence, and Change, guides our decision-making and actions daily. We value individuality, encourage new ideas, and are committed to sustainability across our global business. You will have the opportunity to contribute to re-engineering the financial ecosystem to support sustainable economic growth and the just transition to net zero. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives to ensure a collaborative and inclusive work environment.

Please take a moment to review our privacy notice, which outlines how personal information is handled by London Stock Exchange Group (LSEG) and your rights as a data subject. If you are representing a Recruitment Agency Partner, it is essential to ensure that candidates applying to LSEG are aware of this privacy notice.
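By way of a hedged illustration of the Iceberg-based lakehouse work described above, the snippet below writes a DataFrame to an Apache Iceberg table from PySpark. The catalog name, warehouse path, and table are assumptions rather than LSEG's actual configuration, and the session assumes the Iceberg Spark runtime jar is on the classpath.

```python
# Hypothetical Iceberg write from PySpark; catalog/table names are invented.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-sketch")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")  # simple file-based catalog
    .config("spark.sql.catalog.lake.warehouse", "s3a://lakehouse/warehouse")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
# DataFrameWriterV2 API: creates or replaces the Iceberg table lake.db.events.
df.writeTo("lake.db.events").createOrReplace()
```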
Posted 2 days ago
2.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
What are we looking for?
- Must have experience with at least one cloud platform (AWS, GCP, or Azure); AWS preferred
- Must have experience with lakehouse-based systems such as Iceberg, Hudi, or Delta
- Must have experience with at least one programming language (Python, Scala, or Java) along with SQL
- Must have experience with Big Data technologies such as Spark, Hadoop, Hive, or other distributed systems
- Must have experience with data orchestration tools like Airflow
- Must have experience in building reliable and scalable ETL pipelines
- Good to have experience in data modeling
- Good to have exposure to building AI-led data applications/services

Qualifications and Skills
- 2-6 years of professional experience in a Data Engineering role.
- Knowledge of distributed systems such as Hadoop, Hive, Spark, Kafka, etc.
Posted 3 days ago
7.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team: Roku runs one of the largest data lakes in the world. We store over 70 PB of data, run over 10 million queries per month, and scan over 100 PB of data per month. The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate, and access the data in the lake for both streaming data and batch. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in Open Source, and we are planning to increase our engagement over time.

About the Role: Roku is in the process of modernizing its Big Data Platform. We are working on defining the new architecture to improve user experience, minimize cost, and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert with Big Data technologies? Have you looked under the hood of these systems? Are you interested in Open Source? If you answered yes to these questions, this role is for you!

What you will be doing: You will be responsible for streamlining and tuning existing Big Data systems and pipelines and building new ones. Making sure the systems run efficiently and with minimal cost is a top priority. You will be making changes to the underlying systems, and if an opportunity arises, you can contribute your work back into open source. You will also be responsible for supporting internal customers and on-call services for the systems we host. Providing a stable environment and a great user experience is another top priority for the team.

We are excited if you have:
- 7+ years of production experience building big data platforms based upon Spark, Trino, or equivalent
- Strong programming expertise in Java, Scala, Kotlin, or another JVM language
- A robust grasp of distributed systems concepts, algorithms, and data structures
- Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc.
- Experience working with at least 3 of the technologies/tools mentioned here: Big Data / Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc.
- Extensive hands-on experience with public cloud, AWS or GCP
- BS/MS degree in CS or equivalent
- AI literacy and an AI growth mindset

Benefits: Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources.
Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture: Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002.

To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
Posted 4 days ago
3.0 - 6.0 years
11 - 20 Lacs
Bengaluru
Work from Office
Role & responsibilities
We are seeking a skilled Data Engineer to maintain robust data infrastructure and pipelines that support our operational analytics and business intelligence needs. Candidates will bridge the gap between data engineering and operations, ensuring reliable, scalable, and efficient data systems that enable data-driven decision making across the organization.
- Strong proficiency in Spark SQL; hands-on experience with real-time Kafka and Flink
- Databases: strong knowledge of relational databases (Oracle, MySQL) and NoSQL systems
- Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
- Strong operations management and stakeholder communication skills
- Flexibility to work across time zones and a cross-cultural communication mindset
- Experience working in cross-functional teams
- Continuous learning mindset and adaptability to new technologies

Preferred candidate profile
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
- 3+ years of experience in data engineering, software engineering, or a related role
- Proven experience building and maintaining production data pipelines
- Expertise in the Hadoop ecosystem: Spark SQL, Iceberg, Hive, etc.
- Extensive experience with Apache Kafka, Apache Flink, and other relevant streaming technologies
- Orchestration tools: Apache Airflow and UC4; proficiency in Python, Unix shell scripting, or similar languages
- Good understanding of SQL across Oracle, SQL Server, NoSQL, or similar systems
- Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
- Immediate joiners or candidates with a notice period under 30 days preferred
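For context, a small Spark SQL batch sketch of the kind of aggregation this profile names; the source path, table, and columns are invented for illustration.

```python
# Hypothetical Spark SQL aggregation; paths and columns are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

orders = spark.read.parquet("s3a://warehouse/orders/")  # assumed source path
orders.createOrReplaceTempView("orders")

daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")
daily.write.mode("overwrite").parquet("s3a://warehouse/daily_orders/")
```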
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Platform Engineer at our company, you will be a key member of the Data Platform team, leveraging your expertise in data platforms, data warehousing, and data administration. Your role will involve defining and aligning data platform strategies across the organization, ensuring optimal performance of our data lake and data warehouse environment to meet business needs effectively.

Responsibilities:
- Define and align Data Lake and Data Warehouse usage strategies organization-wide
- Design, develop, and maintain Data Lake and Data Warehouse solutions
- Perform data lake and data warehouse administration tasks including user management, security, and performance tuning
- Collaborate with data architects, business stakeholders, and other teams to understand data requirements and establish guidelines and processes
- Define cost attribution, optimization, and monitoring strategies for the data platform
- Develop and maintain data models, schemas, and database objects
- Monitor and optimize data lake and data warehouse performance for high availability and reliability
- Stay updated with the latest advancements in data platforms and related technologies
- Provide mentorship and guidance to junior team members

Minimum Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or related field
- 5+ years of experience in Data Platform engineering
- 3+ years of hands-on experience with data lake and data warehouse
- Experience with cloud platforms like AWS, Data Warehouse solutions like Snowflake
- Experience with provisioning and maintenance of Spark, Presto, Kubernetes clusters
- Familiarity with open table formats like Iceberg, metadata stores like HMS, GDC, etc.
- Strong problem-solving skills and attention to detail
- Excellent communication and collaboration skills

Preferred Qualifications:
- Snowflake experience
- Proficiency in coding languages such as Python
- Familiarity with data visualization tools like Looker
- Experience with Agile/Scrum methodologies

Join us at Autodesk, where every day brings new opportunities to create amazing things with our software. Embrace our culture that values diversity, belonging, and innovation, and be part of a team that shapes the future. Discover a rewarding career that helps build a better world for all.
Posted 6 days ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Platform developer at Barclays, you will play a crucial role in shaping the digital landscape and enhancing customer experiences. Leveraging cutting-edge technology, you will work alongside a team of engineers, business analysts, and stakeholders to deliver high-quality solutions that meet business requirements. Your responsibilities will include tackling complex technical challenges, building efficient data pipelines, and staying updated on the latest technologies to continuously enhance your skills.

To excel in this role, you should have hands-on coding experience in Python, along with a strong understanding and practical experience in AWS development. Experience with tools such as Lambda, Glue, Step Functions, IAM roles, and various AWS services will be essential. Additionally, your expertise in building data pipelines using Apache Spark and AWS services will be highly valued. Strong analytical skills, troubleshooting abilities, and a proactive approach to learning new technologies are key attributes for success in this role. Furthermore, experience in designing and developing enterprise-level software solutions, knowledge of different file formats like JSON, Iceberg, Avro, and familiarity with streaming services such as Kafka, MSK, Kinesis, and Glue Streaming will be advantageous. Effective communication and collaboration skills are essential to interact with cross-functional teams and document best practices.

Your role will involve developing and delivering high-quality software solutions, collaborating with various stakeholders to define requirements, promoting a culture of code quality, and staying updated on industry trends. Adherence to secure coding practices, implementation of effective unit testing, and continuous improvement are integral parts of your responsibilities. As a Data Platform developer, you will be expected to lead and supervise a team, guide professional development, and ensure the delivery of work to a consistently high standard. Your impact will extend to related teams within the organization, and you will be responsible for managing risks, strengthening controls, and contributing to the achievement of organizational objectives. Ultimately, you will be part of a team that upholds Barclays' values of Respect, Integrity, Service, Excellence, and Stewardship, while embodying the Barclays Mindset of Empower, Challenge, and Drive in your daily interactions and work ethic.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Platform Engineer Lead at Barclays, your role is crucial in building and maintaining systems that collect, store, process, and analyze data, including data pipelines, data warehouses, and data lakes. Your responsibility includes ensuring the accuracy, accessibility, and security of all data.

To excel in this role, you should have hands-on coding experience in Java or Python and a strong understanding of AWS development, encompassing various services such as Lambda, Glue, Step Functions, IAM roles, and more. Proficiency in building efficient data pipelines using Apache Spark and AWS services is essential. You are expected to possess strong technical acumen, troubleshoot complex systems, and apply sound engineering principles to problem-solving. Continuous learning and staying updated with new technologies are key attributes for success in this role. Design experience in diverse projects where you have led the technical development is advantageous, especially in the Big Data/Data Warehouse domain within financial services. Additional skills in enterprise-level software solutions development, knowledge of different file formats like JSON, Iceberg, Avro, and familiarity with streaming services such as Kafka, MSK, and Kinesis are highly valued. Effective communication, collaboration with cross-functional teams, documentation skills, and experience in mentoring team members are also important aspects of this role.

Your accountabilities will include the construction and maintenance of data architectures pipelines, designing and implementing data warehouses and data lakes, developing processing and analysis algorithms, and collaborating with data scientists to deploy machine learning models. You will also be expected to contribute to strategy, drive requirements for change, manage resources and policies, deliver continuous improvements, and demonstrate leadership behaviors if in a leadership role.

Ultimately, as a Data Platform Engineer Lead at Barclays in Pune, you will play a pivotal role in ensuring data accuracy, accessibility, and security while leveraging your technical expertise and collaborative skills to drive innovation and excellence in data management.
Posted 1 week ago
3.0 - 5.0 years
12 - 16 Lacs
Thiruvananthapuram
Work from Office
Develop dimensional and non-dimensional data models for structured and semi-structured data. Design and implement data models optimized for AWS and Snowflake. Ensure efficient data transformations and storage by leveraging AWS Glue and CDK.

Required Candidate profile: Minimum 3 to 5 years of experience. Expertise in AWS Data Architecture and AWS Data Lakehouse. Candidates will work with UK clients; work timings will be aligned with the client's requirements and may follow UK time.
Posted 2 weeks ago
3.0 - 6.0 years
12 - 16 Lacs
Thiruvananthapuram
Work from Office
AWS Cloud Services (Glue, Lambda, Athena, Lakehouse). AWS CDK for Infrastructure-as-Code (IaC) with TypeScript. Data pipeline development and orchestration using AWS Glue. Strong programming skills in Python, PySpark, Spark SQL, and TypeScript.

Required Candidate profile: 3 to 5 years of experience, with client-facing and team leadership experience. Candidates will work with UK clients; work timings will be aligned with the client's requirements and may follow UK time zones.
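The posting names CDK with TypeScript; to keep the sketches in this document in one language, here is a hedged Python equivalent of a minimal CDK v2 stack that provisions a single versioned S3 bucket. The stack and bucket names are placeholders.

```python
# Hypothetical aws-cdk-lib (CDK v2) stack in Python; names are placeholders.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataLakeStack(Stack):
    def __init__(self, scope: Construct, construct_id: str) -> None:
        super().__init__(scope, construct_id)
        # A versioned bucket as a stand-in for a raw data landing zone.
        s3.Bucket(
            self,
            "RawBucket",
            versioned=True,
            removal_policy=RemovalPolicy.DESTROY,  # dev-only convenience
        )

app = App()
DataLakeStack(app, "data-lake-dev")
app.synth()
```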
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
Join us as an AWS Developer at Barclays, where you will play a crucial role in supporting the successful delivery of Location Strategy projects. Your responsibilities will include ensuring that projects are completed within the planned budget, meeting quality standards, and adhering to governance protocols. You will be at the forefront of transforming our digital landscape, driving innovation, and striving for excellence to enhance our digital offerings and provide customers with exceptional experiences.

As an AWS Developer, your key experience should encompass:
- Proficiency in AWS cloud services like S3, Glue, Athena, Lake Formation, and CloudFormation.
- Advanced skills in Python for data engineering and automation.
- Familiarity with ETL frameworks, data transformation, and data quality tools.

Additionally, highly valued skills may involve:
- AWS Data Engineer certification.
- Previous experience in the banking or financial services sector.
- Knowledge of IAM and permissions management in AWS cloud.
- Experience with Databricks, Snowflake, Starburst, and Iceberg.

Your performance will be evaluated based on essential skills crucial for success in this role, including risk and controls management, change and transformation capabilities, strategic thinking, and proficiency in digital and technology. You will also be assessed on job-specific technical skills relevant to the position. This position is located in Pune and aims to:
- Develop and maintain systems for collecting, storing, processing, and analyzing data to ensure accuracy, accessibility, and security.
- Build and manage data architectures pipelines for transferring and processing data effectively.
- Design data warehouses and data lakes that handle data volumes, velocity, and security requirements.
- Develop algorithms for processing and analyzing data of varying complexity and volumes.
- Collaborate with data scientists to construct and deploy machine learning models.

As an Assistant Vice President, you are expected to provide guidance, influence decision-making processes, contribute to policy development, and ensure operational efficiency. You will lead a team in executing complex tasks, set objectives, coach employees, evaluate performance, and determine reward outcomes. If you have leadership responsibilities, you must exhibit leadership behaviors such as listening, inspiring, aligning, and developing others. For individual contributors, the role involves leading collaborative assignments, guiding team members, identifying the need for specialized input, proposing new directions for projects, and consulting on complex issues. You will also be responsible for risk management, policy development, and ensuring compliance with governance standards.

All colleagues are required to embody the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as demonstrate the Barclays Mindset of Empower, Challenge, and Drive, guiding our behavior and decisions.
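As a hedged illustration of the Athena usage listed above, a boto3 call that starts a query; the region, database, table, and results bucket are placeholder assumptions.

```python
# Hypothetical Athena query via boto3; all identifiers are placeholders.
import boto3

athena = boto3.client("athena", region_name="eu-west-2")  # assumed region

response = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},       # assumed Glue database
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution for completion
```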
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As a Senior Lead Engineer specializing in Python and Spark within AWS, you will be tasked with designing, building, and maintaining robust, scalable, and efficient ETL pipelines. Your primary focus will be ensuring alignment with the data lakehouse architecture on AWS. You will leverage your extensive expertise in Python and Spark to develop and optimize workflows using AWS services such as Glue, Glue Data Catalog, Lambda, and S3.

In this role, you will implement data quality and governance frameworks to guarantee reliable and consistent data processing across the platform. Collaborating with cross-functional teams, you will gather requirements, provide technical insights, and deliver high-quality data solutions. Your responsibilities will also include driving the migration of existing data processing workflows to the lakehouse architecture by leveraging Iceberg capabilities. As a key member of the team, you will establish and enforce best practices for coding standards, design patterns, and system architecture. Monitoring and improving system performance and data reliability through proactive analysis and optimization techniques will be essential. Additionally, you will lead technical discussions, mentor team members, and promote a culture of continuous learning and innovation.

Your interactions will primarily involve senior management and the architectural group, development managers and team leads, data engineers and analysts, as well as agile team members. Therefore, excellent interpersonal skills, both verbal and written, will be crucial in articulating complex technical solutions to diverse audiences.

To excel in this role, you must possess a consistent track record of designing and implementing complex data processing workflows using Python and Spark. Strong experience with AWS services such as Glue, Glue Data Catalog, Lambda, S3, and EMR is essential, with a focus on data lakehouse solutions. A deep understanding of data quality frameworks, data contracts, and governance standards and processes will also be required. Furthermore, the ability to design and implement scalable, maintainable, and secure architectures using modern data technologies is crucial. Hands-on experience with Apache Iceberg and its integration within data lakehouse environments, along with expertise in problem-solving and performance optimization for data workflows, will be key skills for success in this role.

Desirable skills include familiarity with additional programming languages such as Java, experience with serverless computing paradigms, and knowledge of data visualization or reporting tools for stakeholder communication. Certification in AWS or data engineering (e.g., AWS Certified Data Analytics, Certified Spark Developer) would be advantageous. A bachelor's degree in Computer Science, Software Engineering, or a related field is helpful, although equivalent professional experience or certifications will also be considered.

By joining our dynamic organization at LSEG, you will have the opportunity to contribute to driving financial stability, empowering economies, and enabling sustainable growth, all while being part of a collaborative and creative culture that values diversity and sustainability.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Telangana
On-site
As the Vice President of Engineering at Teradata in India, you will be responsible for leading the software development organization for the AI Platform Group. This includes overseeing the execution of the product roadmap for key technologies such as Vector Store, Agent platform, Apps, user experience, and AI/ML-driven use-cases. Your success in this role will be measured by your ability to build a world-class engineering culture, attract and retain technical talent, accelerate product delivery, and drive innovation that brings tangible value to customers.

In this role, you will lead a team of over 150 engineers with a focus on helping customers achieve outcomes with Data and AI. Collaboration with key functions such as Product Management, Product Operations, Security, Customer Success, and Executive Leadership will be essential to your success. You will also lead a regional team of up to 500 individuals, including software development, cloud engineering, DevOps, engineering operations, and architecture teams. Collaboration with various stakeholders at regional and global levels will be a key aspect of your role.

To be considered a qualified candidate for this position, you should have at least 10 years of senior leadership experience in product development or engineering within enterprise software product companies. Additionally, you should have a minimum of 3 years of experience in a VP Product or equivalent role managing large-scale technical teams in a growth market. You must have a proven track record of leading agentic AI development and scaling AI in a hybrid cloud environment, as well as experience with Agile and DevSecOps methodologies. Your background should include expertise in cloud platforms, data harmonization, data analytics for AI, Kubernetes, containerization, and microservices-based architectures. Experience in delivering SaaS-based data and analytics platforms, modern data stack technologies, AI/ML infrastructure, enterprise security, and performance engineering is also crucial. A passion for open-source collaboration, building high-performing engineering cultures, and inclusive leadership is highly valued. Ideally, you should hold a Master's degree in Engineering, Computer Science, or an MBA.

At Teradata, we prioritize a people-first culture, offer a flexible work model, focus on well-being, and are committed to Diversity, Equity, and Inclusion. Join us in our mission to empower our customers and drive innovation in the world of AI and data analytics.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Join us as a Data Engineer at Barclays, where you will spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable and secure infrastructure, ensuring seamless delivery of our digital solutions.

To be successful as a Data Engineer, you should have hands-on experience in PySpark and strong knowledge of DataFrames, RDDs, and Spark SQL. You should also have hands-on experience in developing, testing, and maintaining applications on AWS Cloud. A strong hold on the AWS Data Analytics Technology Stack (Glue, S3, Lambda, Lake Formation, Athena) is essential. Additionally, you should be able to design and implement scalable and efficient data transformation/storage solutions using Snowflake. Experience in data ingestion to Snowflake for different storage formats such as Parquet, Iceberg, JSON, CSV, etc., is required. Familiarity with using DBT (Data Build Tool) with Snowflake for ELT pipeline development is necessary. Advanced SQL and PL/SQL programming skills are a must. Experience in building reusable components using Snowflake and AWS tools/technology is highly valued. Exposure to data governance or lineage tools such as Immuta and Alation is an added advantage. Knowledge of orchestration tools such as Apache Airflow or Snowflake Tasks is beneficial, and familiarity with the Ab Initio ETL tool is a plus.

Some other highly valued skills may include the ability to engage with stakeholders, elicit requirements/user stories, and translate requirements into ETL components. A good understanding of infrastructure setup and the ability to provide solutions either individually or working with teams is essential. Knowledge of Data Marts and Data Warehousing concepts, along with good analytical and interpersonal skills, is required. Experience implementing a cloud-based enterprise data warehouse across multiple data platforms, including Snowflake and NoSQL environments, to build a data movement strategy is also important.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, digital and technology, as well as job-specific technical skills. The role is based out of Chennai.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes to ensure that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintenance of data architectures pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implementation of data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Development of processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaboration with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Meet the needs of stakeholders/customers through specialist advice and support.
- Perform prescribed activities in a timely manner and to a high standard which will impact both the role itself and surrounding roles.
- Likely to have responsibility for specific processes within a team.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- Demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
- Manage own workload, take responsibility for the implementation of systems and processes within own work area and participate in projects broader than the direct team.
- Execute work requirements as identified in processes and procedures, collaborating with and impacting on the work of closely related teams.
- Provide specialist advice and support pertaining to own work area.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how all teams in the area contribute to the objectives of the broader sub-function, delivering impact on the work of collaborating teams.
- Continually develop awareness of the underlying principles and concepts on which the work within the area of responsibility is based, building upon administrative/operational expertise.
- Make judgements based on practice and previous experience.
- Assess the validity and applicability of previous or similar experiences and evaluate options under circumstances that are not covered by procedures.
- Communicate sensitive or difficult information to customers in areas related specifically to customer advice or day-to-day administrative requirements.
- Build relationships with stakeholders/customers to identify and address their needs.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship - our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset - to Empower, Challenge, and Drive - the operating manual for how we behave.
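For illustration, a hedged sketch of loading a Parquet file into Snowflake with the Python connector, echoing the ingestion work described above; the account, credentials, and table are placeholders, and the target table is assumed to already exist with matching columns.

```python
# Hypothetical Snowflake Parquet load; connection details are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myaccount", user="etl_user", password="***",  # placeholders
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()
# Stage the local file into the table's internal stage, then copy it in.
cur.execute("PUT file:///tmp/events.parquet @%EVENTS")
cur.execute("""
    COPY INTO EVENTS
    FROM @%EVENTS
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")
cur.close()
conn.close()
```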
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Telangana
On-site
As the Vice President of Engineering at Teradata, you will be responsible for leading the India-based software development organization within the AI Platform Group. Your main focus will be on executing the product roadmap for key technologies such as Vector Store, Agent platform, Apps, user experience, and AI/ML-driven use-cases at scale. Success in this role will involve building a world-class engineering culture, attracting and retaining top technical talent, accelerating hybrid cloud-first product delivery, and driving innovation that brings measurable value to customers.

You will be leading a team of over 150 engineers with the goal of helping customers achieve outcomes with Data and AI. Collaboration with Product Management, Product Operations, Security, Customer Success, and Executive Leadership will be key aspects of your role. Additionally, you will work closely with a high-impact regional team of up to 500 people, including software development, cloud engineering, DevOps, engineering operations, and architecture teams.

To qualify for this position, you should have over 10 years of senior leadership experience in product development, engineering, or technology leadership within enterprise software product companies. You should also have at least 3 years of experience in a VP Product or equivalent role managing large-scale technical teams in a growth market. Experience in leading the development of agentic AI and scaling AI in a hybrid cloud environment is essential. Success in implementing and scaling Agile and DevSecOps methodologies, as well as modernizing legacy architectures into service-based systems, will be key qualifications.

Your background should include expertise in cloud platforms, data harmonization, data analytics for AI, Kubernetes, containerization, and microservices-based architectures. Experience in delivering SaaS-based data and analytics platforms, familiarity with modern data stack technologies, AI/ML infrastructure, enterprise security, data governance, and API-first design will be beneficial. Additionally, a track record of building high-performing engineering cultures, inclusive leadership teams, and a passion for open-source collaboration are desired qualities. A Master's degree in Engineering, Computer Science, or an MBA is preferred for this role.

At Teradata, we prioritize a people-first culture, embrace a flexible work model, focus on well-being, and are committed to Diversity, Equity, and Inclusion. Join us in our dedication to fostering an equitable environment that celebrates individuals for all aspects of who they are.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As a Senior Lead Engineer specializing in Python and Spark in AWS, you will be responsible for designing, building, and maintaining robust, scalable, and efficient ETL pipelines. Your primary focus will be on ensuring alignment with data lakehouse architecture on AWS and optimizing workflows using services such as Glue, Lambda, and S3. Collaborating with cross-functional teams, you will gather requirements, provide technical insights, and deliver high-quality data solutions.

Your role will involve driving the migration of existing data processing workflows to the lakehouse architecture, leveraging Iceberg capabilities, and enforcing best practices for coding standards and system architecture. You will play a key role in implementing data quality and governance frameworks to ensure reliable and consistent data processing across the platform. Monitoring and improving system performance, optimizing data workflows, and ensuring all solutions are secure, compliant, and meet industry standards will be crucial aspects of your responsibilities. Leading technical discussions, mentoring team members, and fostering a culture of continuous learning and innovation are essential for this role. You will also maintain relationships with senior management, architectural groups, development managers, team leads, data engineers, analysts, and agile team members.

Key Skills and Experience:
- Extensive expertise in Python and Spark for designing and implementing complex data processing workflows.
- Strong experience with AWS services such as Glue, Lambda, S3, and EMR, focusing on data lakehouse solutions.
- Deep understanding of data quality frameworks, data contracts, and governance processes.
- Ability to design and implement scalable, maintainable, and secure architectures using modern data technologies.
- Hands-on experience with Apache Iceberg and its integration within data lakehouse environments.
- Expertise in problem-solving, performance optimization, and Agile methodologies.
- Excellent interpersonal skills with the ability to communicate complex technical solutions effectively.

Desired Skills and Experience:
- Familiarity with additional programming languages such as Java.
- Experience with serverless computing paradigms.
- Knowledge of data visualization or reporting tools for stakeholder communication.
- Certification in AWS or data engineering (e.g., AWS Certified Data Analytics, Certified Spark Developer).

Education and Certifications:
- A bachelor's degree in Computer Science, Software Engineering, or a related field is helpful.
- Equivalent professional experience or certifications will also be considered.

Join us at LSEG, a leading global financial markets infrastructure and data provider, where you will be part of a dynamic organization across 65 countries. We value individuality, encourage new ideas, and are committed to sustainability, driving sustainable economic growth and inclusivity. Experience the critical role we play in re-engineering the financial ecosystem and creating economic opportunities while accelerating the transition to net zero. At LSEG, we offer tailored benefits including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of cloud and big data platforms. Your role will involve representing the NADP SRE team, working in a dynamic environment, and providing technical leadership in defining and executing the team's technical roadmap. Collaborating with cross-functional teams, including software development, product management, customers, and security teams, is essential. Your contributions will directly impact the success of machine learning (ML) and AI initiatives by ensuring a robust and efficient platform infrastructure aligned with operational excellence.

In this role, you will design, build, and optimize cloud and data infrastructure to ensure high availability, reliability, and scalability of big-data and ML/AI systems. Collaboration with cross-functional teams will be crucial in creating secure, scalable solutions that support ML/AI workloads and enhance operational efficiency through automation. Troubleshooting complex technical problems, conducting root cause analyses, and contributing to continuous improvement efforts are key responsibilities. You will lead the architectural vision, shape the team's technical strategy and roadmap, and act as a mentor and technical leader to foster a culture of engineering and operational excellence. Engaging with customers and stakeholders to understand use cases and feedback, translating them into actionable insights, and effectively influencing stakeholders at all levels are essential aspects of the role. Utilizing strong programming skills to integrate software and systems engineering, building core data platform capabilities and automation to meet enterprise customer needs, is a crucial requirement. Developing strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at an enterprise scale while enforcing engineering best practices is also part of the role.

Qualifications for this position include 8-12 years of relevant experience and a bachelor's engineering degree in computer science or its equivalent. Candidates should have the ability to design and implement scalable solutions with a focus on streamlining operations. Strong hands-on experience in Cloud, preferably AWS, is required, along with Infrastructure as Code skills, ideally with Terraform and EKS or Kubernetes. Proficiency in observability tools like Prometheus, Grafana, Thanos, CloudWatch, OpenTelemetry, and the ELK stack is necessary. Writing high-quality code in Python, Go, or equivalent programming languages is essential, as well as a good understanding of Unix/Linux systems, system libraries, file systems, and client-server protocols. Experience in building Cloud, Big Data, and/or ML/AI infrastructure, architecting software and infrastructure at scale, and certifications in cloud and security domains are beneficial qualifications for this role.

Cisco emphasizes diversity and encourages candidates to apply even if they do not meet every single qualification. Diverse perspectives and skills are valued, and Cisco believes that diverse teams are better equipped to solve problems, innovate, and create a positive impact.
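As a small, hedged example of the observability tooling this role lists, the snippet below exposes Prometheus metrics from a Python worker using prometheus_client; the metric names and port are assumptions.

```python
# Hypothetical Prometheus instrumentation; metric names/port are placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

RECORDS = Counter("pipeline_records_total", "Records processed")
LATENCY = Histogram("pipeline_batch_seconds", "Batch processing time")

start_http_server(8000)  # exposes /metrics for Prometheus to scrape
while True:
    with LATENCY.time():
        time.sleep(random.random())  # stand-in for real batch work
    RECORDS.inc()
```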
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
About Mindstix Software Labs: Mindstix accelerates digital transformation for the world's leading brands. We are a team of passionate innovators specialized in Cloud Engineering, DevOps, Data Science, and Digital Experiences. Our UX studio and modern-stack engineers deliver world-class products for our global customers that include Fortune 500 Enterprises and Silicon Valley startups. Our work impacts a diverse set of industries - eCommerce, Luxury Retail, ISV and SaaS, Consumer Tech, and Hospitality. A fast-moving open culture powered by curiosity and craftsmanship. A team committed to bold thinking and innovation at the very intersection of business, technology, and design. That's our DNA.

Roles and Responsibilities: Mindstix is looking for a proficient Data Engineer. You are a collaborative person who takes pleasure in finding solutions to issues that add to the bottom line. You appreciate technical work by hand and feel a sense of ownership. You require a keen eye for detail, work experience as a data analyst, and in-depth knowledge of widely used databases and technologies for data analysis. Your responsibilities include:
- Building outstanding domain-focused data solutions with internal teams, business analysts, and stakeholders.
- Applying data engineering practices and standards to develop robust and maintainable solutions.
- Being motivated by a fast-paced, service-oriented environment and interacting directly with clients on new features for future product releases.
- Being a natural problem-solver and intellectually curious across a breadth of industries and topics.
- Being acquainted with different aspects of Data Management like Data Strategy, Architecture, Governance, Data Quality, Integrity & Data Integration.
- Being extremely well-versed in designing incremental and full data load techniques.

Qualifications and Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, or allied streams.
- 2+ years of hands-on experience in the data engineering domain with DWH development.
- Must have experience with end-to-end data warehouse implementation on Azure or GCP.
- Must have SQL and PL/SQL skills, implementing complex queries and stored procedures.
- Solid understanding of DWH concepts such as OLAP, ETL/ELT, RBAC, Data Modelling, Data Driven Pipelines, Virtual Warehousing, and MPP.
- Expertise in Databricks - Structured Streaming, Lakehouse Architecture, DLT, Data Modeling, Vacuum, Time Travel, Security, Monitoring, Dashboards, DBSQL, and Unit Testing.
- Expertise in Snowflake - Monitoring, RBACs, Virtual Warehousing, Query Performance Tuning, and Time Travel.
- Understanding of Apache Spark, Airflow, Hudi, Iceberg, Nessie, NiFi, Luigi, and Arrow (Good to have).
- Strong foundations in computer science, data structures, algorithms, and programming logic.
- Excellent logical reasoning and data interpretation capability.
- Ability to interpret business requirements accurately.
- Exposure to work with multicultural international customers.
- Experience in the Retail/Supply Chain/CPG/eComm/Health industry is a plus.

Who Fits Best?
- You are a data enthusiast and problem solver.
- You are a self-motivated and fast learner with a strong sense of ownership and drive.
- You enjoy working in a fast-paced creative environment.
- You appreciate great design, have a strong sense of aesthetics and have a keen eye for detail.
- You thrive in a customer-centric environment with the ability to actively listen, empathize and collaborate with globally distributed teams.
- You are a team player who desires to mentor and inspire others to do their best.
- You love expressing ideas and articulating well with strong written and verbal English communication and presentation skills.
- You are detail-oriented with an appreciation for craftsmanship.

Benefits:
- Flexible working environment.
- Competitive compensation and perks.
- Health insurance coverage.
- Accelerated career paths.
- Rewards and recognition.
- Sponsored certifications.
- Global customers.
- Mentorship by industry leaders.

Location: This position is primarily based at our Pune (India) headquarters, requiring all potential hires to work from this location. A modern workplace is deeply collaborative by nature, while also demanding a touch of flexibility. We embrace deep collaboration at our offices with reasonable flexi-timing and hybrid options for our seasoned team members.

Equal Opportunity Employer.
Posted 3 weeks ago
1.0 - 4.0 years
12 - 16 Lacs
Gurugram
Hybrid
Primary Role Responsibilities:
- Develop and maintain data ingestion and transformation pipelines across on-premise and cloud platforms.
- Develop scalable ETL/ELT pipelines that integrate data from a variety of sources (e.g., form-based entries, SQL databases, Snowflake, SharePoint).
- Collaborate with data scientists, data analysts, simulation engineers and IT personnel to deliver data engineering and predictive data analytics projects.
- Implement data quality checks, logging, and monitoring to ensure reliable operations.
- Follow and maintain data versioning, schema evolution, and governance controls and guidelines.
- Help administer Snowflake environments for cloud analytics.
- Work with more senior staff to improve solution architectures and automation.
- Stay updated with the latest data engineering technologies and trends.
- Participate in code reviews and knowledge sharing sessions.
- Participate in and plan new data projects that impact business and technical domains.

Required Qualifications:
- Bachelor's or master's degree in computer science, data engineering, or a related field.
- 1-3 years of experience in data engineering, ETL/ELT development, and/or backend software engineering.
- Demonstrated expertise in Python and SQL.
- Demonstrated experience working with data lakes and/or data warehouses (e.g., Snowflake, Databricks, or similar).
- Familiarity with source control and development practices (e.g., Git, Azure DevOps).
- Strong problem-solving skills and eagerness to work with cross-functional globalized teams.

Preferred Qualifications: Required qualifications, plus:
- Working experience and knowledge of scientific and R&D workflows, including simulation data and LIMS systems.
- Demonstrated ability to balance operational support and longer-term project contributions.
- Experience with Java.
- Strong communication and presentation skills.
- Motivated and self-driven learner.
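For illustration, a minimal pandas sketch of the kind of data quality checks this role mentions; the column names and rules are invented assumptions.

```python
# Hypothetical batch quality checks; columns and rules are placeholders.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list:
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []
    if df["id"].isnull().any():
        failures.append("null ids found")
    if df["id"].duplicated().any():
        failures.append("duplicate ids found")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    return failures

batch = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 5.5, 7.25]})
problems = run_quality_checks(batch)
print(problems or "all checks passed")
```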
Posted 3 weeks ago
7.0 - 11.0 years
45 - 60 Lacs
Bengaluru
Remote
About the Role: The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering, and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper-scale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more.

As an engineer in this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for data preparation, cataloging, feature engineering, model training, and model serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You'll Do:
- Help design, build, and facilitate adoption of a modern Data+ML platform
- Modularize complex ML code into standardized and repeatable components
- Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring
- Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines
- Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines
- Review code changes from data scientists and champion software development best practices
- Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment

What You'll Need:
- B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience.
- 3+ years of experience developing and deploying machine learning solutions to production. Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used.
- 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
- Experience building data platform product(s) or features with one of Apache Spark, Flink, or comparable tools in GCP. Experience with Iceberg is highly desirable.
- Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.)
- Production experience with infrastructure-as-code tools such as Terraform, FluxCD
- Expert-level experience with Python; Java/Scala exposure is recommended.
- Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools
- Expert-level experience with CI/CD frameworks such as GitHub Actions
- Expert-level experience with containerization frameworks
- Strong analytical and problem solving skills, capable of working in a dynamic environment
- Exceptional interpersonal and communication skills; work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes

Experience with the following is desirable:
- Go
- Iceberg
- Pinot or another time-series/OLAP-style database
- Jenkins
- Parquet
- Protocol Buffers/gRPC
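As a hedged sketch of the MLflow-style experiment tracking named above; the experiment name, parameters, and metric values are invented for illustration.

```python
# Hypothetical MLflow tracking calls; names and values are placeholders.
import mlflow

mlflow.set_experiment("threat-classifier")  # assumed experiment name
with mlflow.start_run():
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("val_auc", 0.91)      # placeholder metric value
```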
Posted 4 weeks ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs for performance tuning, partitioning, and caching strategies.
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance optimization skills.
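For illustration, a short PySpark sketch of the tuning patterns this posting names: repartitioning, caching a reused DataFrame, and writing a partitioned layout. Paths and columns are hypothetical.

```python
# Hypothetical PySpark tuning patterns; paths and columns are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

events = spark.read.parquet("s3a://lake/raw/events/")  # assumed input path
events = events.repartition(200, "customer_id")        # spread skewed keys
events.cache()                                         # reused twice below

summary = events.groupBy("customer_id").count()
summary.write.mode("overwrite").parquet("s3a://lake/curated/summary/")

# Partitioned layout enables partition pruning for date-filtered queries.
events.write.partitionBy("event_date").mode("overwrite").parquet(
    "s3a://lake/curated/events/"
)
```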
Posted 1 month ago
10.0 - 15.0 years
10 - 15 Lacs
Pune, Maharashtra, India
On-site
REQUIRED SKILLS & QUALIFICATIONS

TECHNICAL SKILLS:
- Cloud & Data Lake: Azure Data Lake (ADLS Gen2), Databricks, Delta Lake, Iceberg
- Reporting Tools: Power BI, Tableau, or a similar toolset
- Streaming & Messaging: Confluent Kafka, Apache Flink, Azure Event Hubs
- Big Data Processing: Apache Spark, Databricks, Flink SQL, Delta Live Tables
- Programming: Python (PySpark, Pandas), SQL
- Storage & Formats: Parquet, Avro, ORC, JSON
- Data Modeling: Dimensional modeling, Data Vault, Lakehouse architecture

MINIMUM QUALIFICATIONS
- 8+ years of end-to-end design and architecture of enterprise-level data platforms and reporting/analytical solutions
- 5+ years of expertise in real-time and batch reporting and analytical solution architecture
- 4+ years of experience with Power BI, Tableau, or similar technology solutions
- 3+ years of experience with design and architecture of big data solutions
- 3+ years of hands-on experience with enterprise-level streaming data solutions using Python, Kafka/Flink, and Iceberg (an illustrative sketch follows after this listing)

ADDITIONAL QUALIFICATIONS
- 8+ years of experience with dimensional modeling and data lake design methodologies
- 8+ years of experience with relational and non-relational databases (e.g., SQL Server, Cosmos DB)
- 3+ years of experience with readiness, provisioning, security, and best practices on the Azure data platform, including orchestration with Data Factory
- Experience working with business stakeholders on requirements and use case analysis
- Strong communication and collaboration skills with creative problem-solving ability

PREFERRED QUALIFICATIONS
- Bachelor's degree in computer science or equivalent work experience
- Experience with Agile/Scrum methodology
- Experience in the tax and accounting domain a plus
- Azure Data Engineer certification a plus
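To illustrate the streaming qualification above, the sketch below consumes JSON events from Kafka with Spark Structured Streaming and appends them to a Delta table, the bronze-layer pattern common in lakehouse designs like this role's. The broker address, topic, schema, and paths are hypothetical, and a Spark build with the Kafka and Delta Lake connectors on the classpath is assumed.

```python
# Illustrative sketch only: Kafka -> Delta with Structured Streaming.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-to-delta").getOrCreate()

# Hypothetical event schema for the JSON payload in the Kafka value.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         # Kafka delivers bytes; decode and parse the JSON payload.
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Append to a Delta path; the checkpoint makes the stream restartable
# with exactly-once sink semantics.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/chk/events")
          .outputMode("append")
          .start("/lake/bronze/events")
)
query.awaitTermination()
```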
Posted 1 month ago
3.0 - 4.0 years
3 - 4 Lacs
Hyderabad, Telangana, India
On-site
Big Data Engineer / Infrastructure Developer
- Liaising with coworkers and clients to elucidate the requirements for each task
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed
- Reformulating existing frameworks to optimize their functioning
- Testing such structures to ensure that they are fit for use
- Preparing raw data for manipulation by data scientists
- Detecting and correcting errors in your work
- Ensuring that your work remains backed up and readily accessible to relevant coworkers
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs

Experience with: Azure ADLS, Apache Parquet, Iceberg, Kubeflow, Airflow (an illustrative orchestration sketch follows after this listing)
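Since the listing names Airflow for orchestration, here is a minimal sketch of the kind of two-task DAG such a role maintains: one task prepares raw data for data scientists (for example, converting files to Parquet) and a second validates the output. The DAG id, task bodies, and schedule are hypothetical; a recent Airflow 2.x release is assumed.

```python
# Illustrative sketch only: a two-task Airflow DAG. The dag_id, the
# helper functions, and the schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def prepare_raw_data(**_):
    # Placeholder for the real preparation step, e.g. raw files -> Parquet.
    print("converting raw files to Parquet ...")


def validate_output(**_):
    # Placeholder for data-quality checks on the prepared output.
    print("checking row counts and schema ...")


with DAG(
    dag_id="prepare_raw_data_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    prepare = PythonOperator(task_id="prepare", python_callable=prepare_raw_data)
    validate = PythonOperator(task_id="validate", python_callable=validate_output)
    # Validation only runs after preparation succeeds.
    prepare >> validate
```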
Posted 1 month ago
13.0 - 20.0 years
40 - 45 Lacs
Bengaluru
Work from Office
Principal Architect - Platform & Application Architect

Title: Principal Architect
Location: Onsite, Bangalore
Experience: 15+ years in software & data platform architecture and technology strategy, including 5+ years in architectural leadership roles
Education: Bachelor's/Master's in CS, Engineering, or a related field

Role Overview
We are seeking a Platform & Application Architect to lead the design and implementation of a next-generation, multi-domain data platform and its ecosystem of applications. In this strategic and hands-on role, you will define the overall architecture, select and evolve the technology stack, and establish best practices for governance, scalability, and performance. Your responsibilities will span the full data lifecycle, from ingestion and processing to storage and analytics, while ensuring the platform is adaptable to diverse and evolving customer needs. This role requires close collaboration with product and business teams to translate strategy into actionable, high-impact platforms and products.

Key Responsibilities
1. Architecture & Strategy
- Design the end-to-end architecture for an on-prem/hybrid data platform (data lake/lakehouse, data warehouse, streaming, and analytics components)
- Define and document data blueprints, data domain models, and architectural standards
- Lead build-vs-buy evaluations for platform components and recommend best-fit tools and technologies

2. Data Ingestion & Processing
- Architect batch and real-time ingestion pipelines using tools like Kafka, Apache NiFi, Flink, or Airbyte
- Oversee scalable ETL/ELT processes and orchestrators (Airflow, dbt, Dagster)
- Support diverse data sources: IoT, operational databases, APIs, flat files, unstructured data

3. Storage & Modeling
- Define strategies for data storage and partitioning (data lakes, warehouses, Delta Lake, Iceberg, or Hudi)
- Develop efficient data strategies for both OLAP and OLTP workloads
- Guide schema evolution, data versioning, and performance tuning (an illustrative sketch follows after this listing)

4. Governance, Security, and Compliance
- Establish data governance, cataloging, and lineage tracking frameworks
- Implement access controls, encryption, and audit trails to ensure compliance with DPDPA, GDPR, HIPAA, etc.
- Promote standardization and best practices across business units

5. Platform Engineering & DevOps
- Collaborate with infrastructure and DevOps teams to define CI/CD, monitoring, and DataOps pipelines
- Ensure observability, reliability, and cost efficiency of the platform
- Define SLAs, capacity planning, and disaster recovery plans

6. Collaboration & Mentorship
- Work closely with data engineers, scientists, analysts, and product owners to align platform capabilities with business goals
- Mentor teams on architecture principles, technology choices, and operational excellence

Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 12+ years of experience in software engineering, including 5+ years in architectural leadership roles
- Proven expertise in designing and scaling distributed systems, microservices, APIs, and event-driven architectures using Java, Python, or Node.js
- Strong hands-on experience building scalable data platforms in on-premise, hybrid, or cloud environments
- Deep knowledge of modern data lake and warehouse technologies (e.g., Snowflake, BigQuery, Redshift) and table formats like Delta Lake or Iceberg
- Familiarity with data mesh, data fabric, and lakehouse paradigms
- Strong understanding of system reliability, observability, DevSecOps practices, and platform engineering principles
- Demonstrated success in leading large-scale architectural initiatives across enterprise-grade or consumer-facing platforms
- Excellent communication, documentation, and presentation skills, with the ability to simplify complex concepts and influence at executive levels
- Certifications such as TOGAF or AWS Solutions Architect (Professional) and experience in regulated domains (e.g., finance, healthcare, aviation) are desirable
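To ground the "schema evolution, data versioning" responsibility, below is a minimal sketch using Apache Iceberg's Spark SQL support: an in-place column addition followed by a snapshot-based time-travel read. The catalog name, table name, and snapshot id are hypothetical placeholders, and a Spark session (3.3+ for the VERSION AS OF syntax) configured with an Iceberg catalog is assumed.

```python
# Illustrative sketch only: Iceberg schema evolution and time travel via
# Spark SQL. The catalog "lake", table "lake.sales.orders", and the
# snapshot id are hypothetical placeholders.
from pyspark.sql import SparkSession

# Assumes the Iceberg runtime jar and catalog configuration are supplied
# externally (e.g., via spark-submit --packages and --conf flags).
spark = SparkSession.builder.appName("iceberg-evolution-demo").getOrCreate()

# Add a column in place; Iceberg tracks columns by id, so existing data
# files stay readable without any rewrite.
spark.sql("ALTER TABLE lake.sales.orders ADD COLUMN discount DOUBLE")

# Time travel: query the table as of an earlier snapshot, which supports
# audits and reproducible backfills.
spark.sql(
    "SELECT count(*) AS n FROM lake.sales.orders "
    "VERSION AS OF 123456789012345678"  # hypothetical snapshot id
).show()
```

It is this separation of table metadata (snapshots, column ids) from data files that lets an architect offer versioned, evolvable tables without coordinating rewrites across every consumer.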
Posted 1 month ago