5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Wipro Limited is a prominent technology services and consulting company committed to developing innovative solutions that address clients' most complex digital transformation needs. With comprehensive capabilities in consulting, design, engineering, and operations, Wipro helps clients achieve their ambitious goals and build sustainable, future-ready businesses. With over 230,000 employees and business partners across 65 countries, the company is dedicated to supporting customers, colleagues, and communities in navigating a constantly evolving world. For more information, please visit www.wipro.com.

**Job Title: Hadoop Admin**

**Primary Skills:**
- Proficiency in Hadoop administration, HDFS, Unix/Linux, and knowledge of any scripting language.

**Secondary Skills:**
- Familiarity with Machine Learning infrastructure and AI technologies.

**Experience:**
- Minimum 7 years of relevant experience.

**Key Responsibilities:**
1. **Understanding Requirements and Software Design:**
   - Analyze the product/software requirements and contribute to the design process.
   - Develop software solutions by studying information needs, systems flow, data usage, and work processes.
   - Conduct root cause analysis of system issues and propose solutions to enhance system performance.
   - Collaborate with project managers and functional teams to ensure alignment with software capabilities.
2. **Coding and Software Development:**
   - Code and develop software/modules while ensuring operational feasibility and optimal performance.
   - Create and automate processes for software validation by designing and executing test cases.
   - Modify software to address errors, adapt to new hardware, improve performance, or upgrade interfaces.
   - Prepare detailed reports on programming project specifications, activities, and status.
3. **Status Reporting and Customer Focus:**
   - Maintain ongoing communication with clients to capture requirements and feedback for quality work.
   - Participate in continuing education and training to stay current with industry best practices.
   - Document solutions effectively and ensure clear communication with stakeholders.

**Deliverables:**
1. **Continuous Integration, Deployment & Monitoring of Software:**
   - Ensure error-free onboarding and implementation of software.
   - Monitor throughput percentage and adhere to the release plan schedule.
2. **Quality & CSAT:**
   - Deliver projects on time and manage software effectively.
   - Troubleshoot queries and enhance the customer experience.
3. **MIS & Reporting:**
   - Generate MIS reports on time and maintain accurate documentation.

**Mandatory Skills:**
- Proficiency in Hadoop administration.

**Experience Required:**
- 5-8 years of relevant experience.

Wipro envisions a modern approach to digital transformation and seeks individuals driven by reinvention and continuous evolution. Join Wipro to embark on a journey of self-improvement and professional growth. Applications from candidates with disabilities are encouraged and welcomed.
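For a Hadoop Admin role like this, routine HDFS housekeeping is often scripted against the Hadoop FileSystem API. The sketch below is illustrative only and is not part of the posting: the namenode URI, directory path, and the 500 GB threshold are assumptions you would replace with your cluster's values.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Minimal HDFS health/usage check; the namenode URI, path, and threshold are placeholders.
object HdfsUsageCheck {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    conf.set("fs.defaultFS", "hdfs://namenode:8020") // assumption: adjust to your cluster
    val fs = FileSystem.get(conf)

    // Overall cluster capacity vs. usage
    val status = fs.getStatus
    val usedPct = status.getUsed.toDouble / status.getCapacity * 100
    println(f"HDFS used: $usedPct%.1f%% of capacity")

    // Space consumed under a specific directory (similar to `hdfs dfs -du -s`)
    val dir = new Path("/data/landing")
    val summary = fs.getContentSummary(dir)
    val usedGb = summary.getSpaceConsumed / math.pow(1024, 3)
    println(f"$dir consumes $usedGb%.2f GB (including replication)")
    if (usedGb > 500) println(s"WARN: $dir above the 500 GB threshold")

    fs.close()
  }
}
```

A check like this could be scheduled from cron or an Oozie workflow and wired into whatever alerting the cluster already uses.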
Posted 2 weeks ago
15.0 - 17.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Associate Director - Software Engineering.

In this role, you will be involved in software development, Kubernetes / Google Cloud Platform (GCP) data pipelines, MLOps, DevOps and team leadership. The ideal candidate will play a pivotal role in designing, implementing, and optimizing scalable solutions while mentoring and developing the team to achieve technical excellence.

To be successful in this role, you should meet the following requirements:
- Over 15 years in software development, preferably with trade data.
- Hands-on expertise with Java 1.8+, Apache Spark (2.3/3.x), Hadoop (Spark/HDFS/YARN), GCP, Elasticsearch, RDBMS, SQL, Unix scripting, and ETL processes.
- Codes regularly, leads technical discussions, aligns with business objectives, takes ownership of all technical aspects of the platform, and stays updated with relevant technologies, patterns, and tools.
- Skilled in designing data frame objects, optimizing memory usage, and understanding database/file system write operations.
- Strong background in system and solution architecture, including cluster management for Spark workloads.
- Familiarity with microservices architecture, API-centric systems, and Spring Boot (4+), including reactive programming.
- Practical knowledge of cloud deployments, especially GCP or similar providers, and cloud infrastructure optimization.
- Knowledgeable in big data concepts, DevOps methodologies, and containerization (Docker, Kubernetes).
- Skilled in using Bitbucket/GitHub, Jenkins, and similar CI/CD tools, including the design and maintenance of CI/CD pipelines.
- Provides mentorship, technical guidance, and code reviews for team members; establishes frameworks for junior developers.
- Builds relationships with other technical leads and principal engineers; promotes a collaborative, innovative, and growth-oriented team culture; conducts performance evaluations and delivers feedback.
- Prepares detailed technical designs based on functional requirements and manages technical tasks/tickets.
- Engages with business analysts, product owners, and other technical teams for requirement clarification and integration.

Principal responsibilities
- Lead architecture design in alignment with the future state architecture.
- Establish, document, and implement best practices for end-to-end application initiation and deployment processes.
- Drive continuous improvement initiatives to enhance customer satisfaction.
- Demonstrate flexibility and adaptability according to project requirements.
- Attend and actively participate in relevant project meetings.
- System performance: ensure deliverables satisfy non-functional requirements.
- Industrialisation: ensure robust solutions are developed and tech debt is reduced.
- Innovation: ensure that we are continually improving and benefitting from industry advancements.
- Ensure that assigned work packages (Epics, Stories, Sub-tasks) align with the definition of ready and the definition of done.
- Ensure high-quality test automation (e.g. unit, functional) is in place and meets the agreed level for delivered outputs.
- Technical excellence: influence the pod to deliver technically excellent solutions.
- The technical backlog is also an area of interest and responsibility for the Tech Lead position. The Tech Lead sets standards, ensures principles like DRY, SOLID, and Clean Code are followed, and ensures code quality, security, and scalability.

Requirements

Must have:
- Degree in Computer Science, Engineering, or a closely related discipline (Bachelor's or Master's).
- Over 15 years of expertise in software engineering and cloud platforms, particularly Google Cloud Platform (GCP).
- Deep knowledge of DevOps technologies such as Jenkins, GitLab CI/CD, Terraform, Kubernetes, and Docker.
- Practical experience with version control, automation, and orchestration tools like Git, Jenkins, Ansible/Puppet, and Kubernetes.
- Advanced coding abilities in languages like Python and Java.
- Strong grasp of data engineering, pipeline architecture, and ETL methodologies.
- Excellent verbal and written communication, with strong interpersonal skills.
- Well-versed in DevOps strategies and containerization.
- Experienced with continuous integration and deployment tools (e.g., Jenkins, GitLab CI).
- Knowledgeable about cloud infrastructure and infrastructure-as-code concepts.
- Adept at handling multiple tasks, prioritizing, and collaborating across teams to achieve results.
- Collaborative team member, able to work across functions and engage with domain experts.
- Comfortable working with international teams and diverse cultures, with strong communication skills.

Good to have:
- Domain knowledge in surveillance in general or trade surveillance.
- Experience with other cloud platforms (AWS, Azure).
- Familiarity with monitoring tools like Prometheus, Grafana, or Stackdriver.
- Knowledge of data governance and compliance frameworks.
- Certifications in GCP (e.g., Professional Data Engineer, Professional Cloud Architect).
- Experience working with resources in geographically dispersed teams, appreciating and respecting local cultures.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by - HSBC Software Development India
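The data-frame design and memory-optimization expectations above can be illustrated with a small Spark job. This is a minimal sketch, not HSBC's codebase: the table paths, column names, and the shuffle-partition setting are invented, and whether a broadcast join or caching helps always depends on the actual data volumes.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

// Sketch of a tuned Spark job: explicit shuffle partitions, a broadcast join for a
// small reference table, and caching of a reused DataFrame. Names are illustrative.
object TradeEnrichmentJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-enrichment")
      .config("spark.sql.shuffle.partitions", "400") // assumption: sized for the cluster
      .getOrCreate()

    val trades   = spark.read.parquet("/data/trades/2024")     // large fact data
    val refBooks = spark.read.parquet("/data/reference/books") // small dimension

    // Broadcasting the small side avoids a full shuffle of the large trades set.
    val enriched = trades.join(broadcast(refBooks), Seq("book_id"), "left")

    // Cache only because the result is reused by two downstream aggregations.
    enriched.cache()
    enriched.groupBy("desk").count().show()
    enriched.groupBy("currency").sum("notional").show()

    enriched.unpersist()
    spark.stop()
  }
}
```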
Posted 2 weeks ago
6.0 - 10.0 years
10 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & responsibilities
- At least 5 years of experience and strong knowledge of the Scala programming language; able to write clean, maintainable and efficient Scala code following best practices.
- Good knowledge of fundamental data structures and their usage.
- At least 5 years of experience in designing and developing large-scale, distributed data processing pipelines using Apache Spark and related technologies, with expertise in Spark Core, Spark SQL and Spark Streaming.
- Experience with Hadoop, HDFS, Hive and other big data technologies.
- Familiarity with data warehousing and ETL concepts and techniques.
- Expertise in database concepts and SQL/NoSQL operations.
- UNIX shell scripting for scheduling and running application jobs is an added advantage.
- At least 8 years of experience in project development life cycle activities and maintenance/support projects.
- Work in an Agile environment and participate in daily scrum standups, sprint planning, reviews and retrospectives.
- Understand project requirements and translate them into technical solutions that meet the project quality standards.
- Ability to work in a team in a diverse, multiple-stakeholder environment and collaborate with upstream/downstream functional teams to identify, troubleshoot and resolve data issues.
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication skills.
- Experience in, and desire to work in, a global delivery environment.
- Stay up to date with new technologies and industry trends in development.
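A compact example of the kind of Scala/Spark batch pipeline this role describes is sketched below; the Hive table, column names, and output path are placeholders, and enableHiveSupport assumes the cluster exposes a Hive metastore.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date}

// Minimal batch pipeline: read a Hive table, apply a typed transformation via a case
// class, and write partitioned Parquet. Table and column names are placeholders.
object DailySalesEtl {
  case class Sale(orderId: String, amount: Double, orderDate: java.sql.Date)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-sales-etl")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    val sales = spark.table("staging.sales_raw")
      .filter(col("amount").isNotNull)
      .select(
        col("order_id").as("orderId"),
        col("amount").cast("double").as("amount"),
        to_date(col("order_ts")).as("orderDate"))
      .as[Sale]

    sales.write
      .mode("overwrite")
      .partitionBy("orderDate")
      .parquet("/warehouse/curated/sales")

    spark.stop()
  }
}
```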
Posted 2 weeks ago
15.0 - 19.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Director of Engineering at Citi's Banking Technology division in India, you will play a pivotal role in overseeing the strategic Big Data and Reporting initiatives while leading and ensuring the growth and operational excellence of the engineering site in India. You will be responsible for driving the delivery of large-scale big data platforms and sophisticated reporting solutions for the banking sector by leveraging extensive datasets for insights, risk management, fraud detection, and regulatory compliance. Your role will involve defining and executing the technical roadmap for big data and reporting, aligning it with overall business objectives and industry best practices. Collaborating closely with product owners, business stakeholders, and other engineering leaders, you will translate business requirements into technical solutions to ensure successful delivery. As a seasoned technology leader, you will be instrumental in shaping the technology roadmap, enhancing developer productivity, and embedding next-generation technologies like Generative AI into core processes and products. You will champion new technologies and approaches to enhance data processing, analytics, and reporting capabilities, particularly focusing on real-time analytics and predictive modeling. In addition to the strategic leadership and delivery aspect, you will also lead the India engineering site, providing strategic leadership and operational oversight for the entire engineering organization. This includes attracting, developing, and retaining top engineering talent, building high-performing and engaged teams, and mentoring and coaching engineering managers and individual contributors. You will champion continuous process improvements across the software development lifecycle to enhance developer productivity, streamline workflows, and optimize resource allocation. Furthermore, your role will involve driving the adoption and maturity of DevOps practices and extensive automation across all engineering functions, ensuring high-quality, secure, and rapid software delivery. You will also ensure the delivery of high-quality, scalable, resilient, and secure software solutions that meet stringent banking industry standards and regulatory requirements. Additionally, you will strategically integrate Generative AI tools and methodologies into the software development lifecycle to enhance efficiency, automate code generation, improve testing, and accelerate prototyping. To be successful in this role, you should have at least 15 years of experience in software engineering, with a minimum of 5 years in a leadership role managing large, multi-functional engineering teams. Extensive experience in the banking or financial services technology sector in India is required, along with a proven track record of successfully delivering large-scale big data and reporting initiatives. Proficiency in Big Data technologies, Tableau, Java, Angular, and experience with DevOps practices are essential. Strong leadership and people management skills, exceptional problem-solving abilities, and excellent communication skills are also crucial for this role. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is required to qualify for this position.,
Posted 2 weeks ago
2.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Requirements

Role/Job Title: Data Engineer
Function/Department: Data & Analytics

Job Purpose
The Data Engineer is responsible for delivering day-to-day data engineering projects and aiding in building the business data collection systems and processing pipelines. The role involves writing efficient code for building and maintaining optimized, highly available data pipelines that facilitate deeper analysis and reporting by the Data and Analytics department. He/she works with Data and Analytics teams to leverage data with reporting and scientific tools, for example Tableau, Python, and Spark, and strives to continuously develop the most efficient code for data pipeline or use case needs.

Roles & Responsibilities

Primary responsibilities:
- Minimum 2-3 years of data engineering experience.
- Proven experience in SQL, Spark, and the Hadoop ecosystem.
- Have worked on multiple TBs of data volume from ingestion to consumption.
- Work with business stakeholders to identify and document high-impact business problems and potential solutions.
- Good understanding of Data Lake/Lakehouse architecture and experience/exposure to Hadoop (Cloudera, Hortonworks) and/or AWS.
- Work on the end-to-end data lifecycle across the data ingestion, data transformation and data consumption layers.
- Versed with APIs and their usability.
- A suitable candidate will also be proficient in Spark, Spark Streaming, Hive, and SQL.
- A suitable candidate will also demonstrate experience with big data infrastructure including MapReduce, Hive, HDFS, YARN, HBase, Oozie, etc.
- The candidate will additionally demonstrate substantial experience with, and a deep knowledge of, relational databases.
- Good skills in technical debugging of the code in case of issues, and in working with Git for code versioning.
- Creating technical design documentation for projects/pipelines.

Secondary responsibilities:
- Ability to work independently and handle your own development effort.
- Excellent oral and written communication skills.
- Learn and use internally available analytic technologies.
- Identify key performance indicators and create an educational/deliverables path to achieve the same.
- Use educational background in data engineering and perform data mining analysis.
- Work with BI analysts/engineers to create prototypes.
- Engage in the delivery and presentation of solutions.

Education Qualification
- Graduation: B.E/B.Tech
- Post-graduation: M.Tech

Experience: 2 to 5 years of relevant experience.
Posted 3 weeks ago
8.0 - 11.0 years
6 - 16 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
We are seeking an experienced Software Engineer with deep expertise in Scala programming and Big Data technologies to design, develop, and maintain large-scale distributed data processing systems. The ideal candidate will be a hands-on developer with a strong understanding of data pipelines, Spark ecosystem, and related technologies, capable of delivering clean, efficient, and scalable code in an Agile environment. Key Responsibilities Develop and maintain scalable, efficient, and robust data processing pipelines using Scala and Apache Spark (Spark Core, Spark SQL, Spark Streaming). Write clean, maintainable, and well-documented Scala code following industry best practices and coding standards. Design and implement batch and real-time data processing workflows handling large volumes of data. Work closely with cross-functional teams to understand business requirements and translate them into technical solutions that meet quality standards. Utilize Hadoop ecosystem components such as HDFS, Hive, Sqoop, Impala, and related tools to support data storage and retrieval needs. Develop and optimize ETL processes and data warehousing solutions leveraging Big Data technologies. Apply deep knowledge of Data Structures and algorithms to ensure efficient data processing and system performance. Conduct unit testing, code reviews, and performance tuning of data processing jobs. Automate application job scheduling and execution using UNIX shell scripting (advantageous). Participate actively in Agile development processes including daily standups, sprint planning, reviews, and retrospectives. Collaborate effectively with upstream and downstream teams to identify, troubleshoot, and resolve data pipeline issues. Stay current with emerging technologies, frameworks, and industry trends to continuously improve the architecture and implementation of data solutions. Support production environments by handling incidents, root cause analysis, and continuous improvements. Required Skills & Experience Minimum 8 years of professional software development experience with strong emphasis on Scala programming. Extensive experience designing and building distributed data processing pipelines using Apache Spark (Spark Core, Spark SQL, Spark Streaming). Strong understanding of Hadoop ecosystem technologies including HDFS, Hive, Sqoop, Impala , and related tools. Proficient in SQL and NoSQL databases with sound knowledge of database concepts and operations. Familiarity with Data Warehousing concepts and ETL methodologies. Solid foundation in Data Structures, Algorithms, and Object-Oriented Programming. Experience in UNIX/Linux shell scripting to manage and schedule data jobs (preferred). Proven track record of working in Agile software development environments. Excellent problem-solving skills, with the ability to analyze complex issues and provide efficient solutions. Strong verbal and written communication skills, with experience working in diverse, global delivery teams. Ability to manage multiple tasks, collaborate across teams, and adapt to changing priorities. Desired Qualifications Bachelors or Master’s degree in Computer Science, Engineering, or a related technical field. Previous experience working in a global delivery or distributed team environment. Certification or formal training in Big Data technologies or Scala programming is a plus.
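For the real-time side of the workflows described above, a minimal Structured Streaming sketch might look like the following. The landing directory, schema, and window sizes are assumptions for illustration; a production job would also configure checkpointing and a durable sink.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, window}
import org.apache.spark.sql.types.{DoubleType, StringType, StructType, TimestampType}

// Sketch of a real-time workflow with Structured Streaming: watch a landing directory
// for JSON events and maintain 5-minute aggregates. Paths and schema are assumptions.
object EventStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("event-stream").getOrCreate()

    val schema = new StructType()
      .add("eventId", StringType)
      .add("eventTime", TimestampType)
      .add("amount", DoubleType)

    val events = spark.readStream
      .schema(schema)
      .json("/data/landing/events")

    // Late data beyond the watermark is dropped; counts are grouped per 5-minute window.
    val counts = events
      .withWatermark("eventTime", "10 minutes")
      .groupBy(window(col("eventTime"), "5 minutes"))
      .count()

    val query = counts.writeStream
      .outputMode("update")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```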
Posted 3 weeks ago
10.0 - 12.0 years
35 - 40 Lacs
Pune, India
Work from Office
USEReady helps businesses to be self-reliant on data. Growing over 3000% since inception, USEReady achieved rank #113 in the 2015 Inc. 500 and was honoured by Red Herring as one of the top 100 companies in North America in 2015. USEReady is built on a strong entrepreneurial spirit with unprecedented opportunities for career growth. At USEReady, we believe in achieving career growth while improving our individual competencies. If you desire to be part of a team that believes in mutual success and inspiration, you are welcome to apply. USEReady is a data and analytics firm that provides the strategies, tools, capability, and capacity that businesses need to turn their data into a competitive advantage. USEReady partners with cloud and data ecosystem leaders like Tableau, Salesforce, Snowflake and Amazon Web Services, and has been named Tableau partner of the year multiple times. We have been nominated for and won several awards along this journey. Check us out at www.useready.com

The ideal candidate will be a specialist in data entitlement using Apache Ranger within Starburst. The consultant will be responsible for designing the end-to-end solution, ensuring high performance through mechanisms like materialised views, and implementing fine-grained access control to guarantee data security. This is a hands-on architectural role that will directly impact how our organization accesses and consumes data.

Key Responsibilities
- Design and implement a scalable data architecture using Starburst as the central query and virtualisation layer.
- Take full ownership of data security by designing and implementing fine-grained access control policies in Starburst using Apache Ranger, including row-level filtering, column masking, and tag-based policies.
- Create a unified semantic layer by virtualising data from various Oracle schemas and other potential data sources, providing a single point of access for business users.
- Develop and manage materialised views within Starburst to accelerate query performance and ensure a seamless, interactive experience for Tableau users.
- Leverage your experience with Oracle to effectively connect, query, and model data within the Starburst ecosystem.
- Work directly with business stakeholders and Oracle DBAs to understand requirements, translate them into technical solutions, and ensure successful project delivery.

Note: the role is remote, but it is preferable for the candidate to be based in Pune so they can visit the client office if needed.
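To make the materialised-view and entitlement ideas concrete, here is a rough sketch of how a consumer might create and query a materialised view through Starburst's SQL interface over JDBC. It assumes the open-source Trino JDBC driver is on the classpath, that the target catalog supports materialised views, and that Apache Ranger row-filter and masking policies are enforced server-side; the connection URL, catalog, schema, and table names are all invented.

```scala
import java.sql.DriverManager
import java.util.Properties

// Illustrative only: create a materialised view in Starburst and query it over JDBC.
// Connection details and the MV definition are assumptions; Ranger row-level filters
// and column masks are applied by the coordinator, not by this client code.
object StarburstMvDemo {
  def main(args: Array[String]): Unit = {
    val url   = "jdbc:trino://starburst-coordinator:8080/sales_catalog/analytics"
    val props = new Properties()
    props.setProperty("user", "report_user")

    val conn = DriverManager.getConnection(url, props)
    val stmt = conn.createStatement()

    // Pre-aggregate for Tableau; the refresh strategy would be configured separately.
    stmt.execute(
      """CREATE OR REPLACE MATERIALIZED VIEW daily_revenue AS
        |SELECT order_date, region, sum(amount) AS revenue
        |FROM orders
        |GROUP BY order_date, region""".stripMargin)

    // Rows returned here already reflect any Ranger policies applied to report_user.
    val rs = stmt.executeQuery(
      "SELECT region, revenue FROM daily_revenue WHERE order_date = DATE '2024-01-01'")
    while (rs.next()) println(s"${rs.getString("region")}: ${rs.getDouble("revenue")}")

    conn.close()
  }
}
```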
Posted 3 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
Bengaluru
Work from Office
- 7+ years of data warehousing/engineering, software solutions design and development experience.
- Experience in designing and architecting distributed data systems.
- Code, test, and document new or modified data systems to create robust and scalable applications for data analytics.
- Work with other big data developers to make sure that all data solutions are consistent.
- Partner with the business community to understand requirements, determine training needs and deliver user training sessions.
- Perform technology and product research to better define requirements, resolve important issues and improve the overall capability of the analytics technology stack.
- Evaluate and provide feedback on future technologies and new releases/upgrades.

Job Specific Knowledge:
- Supports Big Data and batch/real-time analytical solutions leveraging transformational technologies.
- Works on multiple projects as a technical team member or drives user requirement analysis and elaboration, design and development of software applications, testing, and build automation tools.
- Participates in creating strategies that use business intelligence and data platforms.

Skills:
- Comfortable programming in, and debugging, Python.
- Have built solutions with public cloud providers such as AWS, Azure, or GCP.
- Expertise in data engineering technologies (e.g. Spark, Hadoop, Kafka).
- Ability to research and incubate new technologies and frameworks.
- Experience with agile or other rapid application development methodologies and tools like Bitbucket, Jira, and Confluence.
- An ability to work in a fast-paced environment where continuous innovation is desired and ambiguity is the norm.
- A passion for technology; we are looking for someone who is keen to leverage their existing skills while trying new approaches.
- Experience collaborating with global teams.
- Ability to work independently and collaboratively with other staff members.
Posted 3 weeks ago
7.0 - 12.0 years
9 - 15 Lacs
Bengaluru
Work from Office
We are looking for lead or principal software engineers to join our Data Cloud team. Our Data Cloud team is responsible for the Zeta Identity Graph platform, which captures billions of behavioural, demographic, environmental, and transactional signals for people-based marketing. As part of this team, the data engineer will be designing and growing our existing data infrastructure to democratize data access, enable complex data analyses, and automate optimization workflows for business and marketing operations.

Job Description:

Essential Responsibilities: As a Lead or Principal Data Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure.
- Daily use of technologies such as HDFS, Spark, Snowflake, Hive, HBase, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems.
- Working with Data Engineers to optimize data models and workflows.
- Working with Data Analysts to develop ETL processes for analysis and reporting.
- Working with Product Managers to design and build data products.
- Working with our DevOps team to scale and optimize our data infrastructure.
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects.
- Participating in a 24/7 on-call rotation (be available by phone or email in case something goes wrong).

Desired Characteristics:
- Minimum 7 years of software engineering experience.
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale, and eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle from inception to production and monitoring.
- Fluency in Python, or solid experience in Scala or Java.
- Proficient with relational databases and advanced SQL.
- Expert in the usage of services like Spark, HDFS, Hive, and HBase.
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale.
- Experience in agile software development processes.
- Excellent interpersonal and communication skills.

Nice to have:
- Experience with large-scale / multi-tenant distributed systems.
- Experience with columnar / NoSQL databases such as Vertica, Snowflake, HBase, Scylla, and Couchbase.
- Experience with real-time streaming frameworks such as Flink and Storm.
- Experience with web frameworks such as Flask and Django.
Posted 3 weeks ago
7.0 - 11.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Platform Engineer - Tech Lead at Deutsche Bank in Pune, India, you will be part of the DB Technology global team of tech specialists. Your role involves leading a group of engineers working on cutting-edge technologies in Hadoop, Big Data, GCP, Terraform, BigQuery, Dataproc, and data management to develop robust data pipelines, ensure data quality, and implement efficient data management solutions. Your leadership will drive innovation, maintain high standards in data infrastructure, and mentor team members to support data-driven initiatives. You will collaborate with data engineers, analysts, cross-functional teams, and stakeholders to ensure the data platform meets the organization's needs. Your responsibilities include working on a hybrid data platform to unlock new insights and drive business growth. You will contribute to all stages of software delivery, from initial analysis to production support, within a cross-functional agile delivery team.

Key Responsibilities:
- Lead a cross-functional team in designing, developing, and implementing on-prem and cloud-based data solutions.
- Provide technical guidance and mentorship to foster continuous learning and improvement.
- Collaborate with product management and stakeholders to define technical requirements and establish delivery priorities.
- Architect and implement scalable, efficient, and reliable data management solutions for complex data workflows and analytics.
- Evaluate tools, technologies, and best practices to enhance the data platform.
- Drive adoption of microservices, containerization, and serverless architectures.
- Establish and enforce best practices in coding, testing, and deployment.
- Oversee code reviews and provide feedback to promote code quality and team growth.

Skills and Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of software engineering experience with a focus on Big Data and GCP technologies.
- Strong leadership skills with experience in mentorship and team growth.
- Expertise in designing and implementing data pipelines, ETL processes, and real-time data processing.
- Hands-on experience with Hadoop ecosystem tools and Google Cloud Platform services.
- Understanding of data quality management and best practices.
- Familiarity with containerization and orchestration tools.
- Strong problem-solving and communication skills.

Deutsche Bank offers a culture of continuous learning, training, and development to support your career progression. You will receive coaching and support from experts in your team and benefit from a range of flexible benefits tailored to your needs. Join us in creating innovative solutions and driving business growth at Deutsche Bank.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You should have a Bachelor's degree in Computer Science, Computer Engineering or a related technical field; a Master's degree or other advanced degree is preferred. With 4-6+ years of total experience, you should possess at least 2+ years of relevant experience in Big Data platforms. Your skill set should include strong analytical, problem-solving, and communication/articulation skills. Furthermore, you are expected to have 3+ years of experience with big data and the Hadoop ecosystem, including Spark, HDFS, Hive, Sqoop, Hudi, Parquet, Apache NiFi, and Kafka. Proficiency in Scala/Spark is required, and knowledge of Python is considered a plus. Hands-on experience with Oracle and MS-SQL databases is essential. In addition, you should have experience working with job schedulers like CA or AutoSys, as well as familiarity with source code control systems such as Git, Jenkins, and Artifactory. Experience with platforms like Tableau and AtScale will be an advantage in this role.
Posted 4 weeks ago
8.0 - 13.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Sr Programmer Analyst will work in the Information Management team of Services Technologies, focusing on Big Data and public cloud adoption projects. This intermediate-level position involves participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The primary objective is to contribute to applications systems analysis and programming activities. Responsibilities: - Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas. - Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation. Provide user and operational support on applications to business users. - Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, evaluate business process, system process, and industry standards, and make evaluative judgments. - Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. - Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems. - Ensure essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts. - Operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as an SME to senior stakeholders and/or other team members. Qualifications: - 8-13 years of work experience with Big Data technologies such as Spark (Scala/Python), Kafka streaming, Hadoop, HDFS, and a solid understanding of Big Data architecture. - Strong exposure to SQL, hands-on experience on Web API, good understanding of data file formats like Impala, Hadoop, Parquet, Avro, Iceberg, etc. - Experience with web services with Kubernetes, and Version control/CI/CD processes with git, Jenkins, harness, etc. - Public cloud experience is preferred, preferably AWS. - Strong data analysis skills and the ability to analyze data for business reporting purposes. - Experience working in an agile environment with fast-paced changing requirements, excellent planning and organizational skills, strong communication skills. - Experience in systems analysis and programming of software applications, managing and implementing successful projects, working knowledge of consulting/project management techniques/methods, ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements. Education: - Bachelor's degree/University degree or equivalent experience. *This job description provides a high-level overview of the types of work performed. Other job-related duties may be assigned as required.*,
Posted 1 month ago
5.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a software developer, you will be working in a constantly evolving environment driven by technological advances and the strategic direction of the organization you are employed by. Your primary responsibilities will include creating, maintaining, auditing, and enhancing systems to meet specific needs, often based on recommendations from systems analysts or architects. You will be tasked with testing both hardware and software systems to identify and resolve system faults. Additionally, you will be involved in writing diagnostic programs and designing and developing code for operating systems and software to ensure optimal efficiency. In situations where necessary, you will also provide recommendations for future developments. Joining us offers numerous benefits, including the opportunity to work on challenging projects and solve complex technical problems. You can expect rapid career growth and the chance to assume leadership roles. Our mentorship program allows you to learn from experienced mentors and industry experts, while our global opportunities enable you to collaborate with clients from around the world and gain international experience. We offer competitive compensation packages and benefits to our employees. If you are passionate about technology and interested in working on innovative projects with a skilled team, pursuing a career as an Infosys Power Programmer could be an excellent choice for you. To be considered for this role, you must possess the following mandatory skills: - Proficiency in AWS Glue, AWS Redshift/Spectrum, S3, API Gateway, Athena, Step, and Lambda functions. - Experience with Extract Transform Load (ETL) and Extract Load & Transform (ELT) data integration patterns. - Expertise in designing and constructing data pipelines. - Development experience in one or more object-oriented programming languages, preferably Python. In terms of job specifications, we are looking for candidates who meet the following criteria: - At least 5 years of hands-on experience in developing, testing, deploying, and debugging Spark Jobs using Scala in the Hadoop Platform. - Profound knowledge of Spark Core and working with RDDs and Spark SQL. - Familiarity with Spark Optimization Techniques and Best Practices. - Strong understanding of Scala Functional Programming concepts like Try, Option, Future, and Collections. - Proficiency in Scala Object-Oriented Programming covering Classes, Traits, Objects (Singleton and Companion), and Case Classes. - Sound knowledge of Scala Language Features including the Type System and Implicit/Givens. - Hands-on experience working in the Hadoop Environment (HDFS/Hive), AWS S3, EMR. - Proficiency in Python programming. - Working experience with Workflow Orchestration tools such as Airflow and Oozie. - Experience with API calls in Scala. - Familiarity and exposure to file formats like Apache AVRO, Parquet, and JSON. - Desirable knowledge of Protocol Buffers and Geospatial data analytics. - Ability to write test cases using frameworks like scalatest. - Good understanding of Build Tools such as Gradle & SBT. - Experience using GIT, resolving conflicts, and working with branches. - Preferred experience in workflow systems like Airflow. - Strong programming skills focusing on data structures and algorithms. - Excellent analytical and communication skills. Candidates applying for this position should have: - 7-10 years of industry experience. - A BE/B.Tech in Computer Science or an equivalent qualification.,
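The Scala language features called out above (case classes, Option, Try, and the collections API) are easy to show in a few lines. The snippet below is a generic illustration with an invented record layout, not Infosys code.

```scala
import scala.util.{Failure, Success, Try}

// Small illustration of the constructs named above: a case class, Option for a field
// that may be absent, and Try for parsing that may fail. The record layout is invented.
object RecordParsing {
  case class Transaction(id: String, amount: BigDecimal, channel: Option[String])

  def parse(line: String): Try[Transaction] = Try {
    val parts = line.split(',').map(_.trim).toList
    Transaction(
      id      = parts(0),
      amount  = BigDecimal(parts(1)),
      channel = parts.lift(2).filter(_.nonEmpty)) // Option: third column may be missing
  }

  def main(args: Array[String]): Unit = {
    val lines = Seq("t-1,120.50,web", "t-2,75.00", "t-3,not-a-number")

    val (ok, bad) = lines.map(parse).partition(_.isSuccess)
    ok.collect { case Success(t) => t }.foreach(println)
    bad.collect { case Failure(e) => e }.foreach(e => println(s"rejected: ${e.getMessage}"))

    // Collections API: total of successfully parsed amounts
    val total = ok.collect { case Success(t) => t.amount }.sum
    println(s"total parsed amount: $total")
  }
}
```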
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Haryana
On-site
As a Hadoop Admin, you will be responsible for managing and supporting Hadoop clusters and various components such as HDFS, HBase, Hive, Sentry, Hue, Yarn, Sqoop, Spark, Oozie, ZooKeeper, Flume, and Solr. With a minimum of 4 years of experience in Hadoop administration, you will play a crucial role in installing, configuring, maintaining, troubleshooting, and monitoring these clusters to ensure their efficient functioning in production support projects. Your primary duties will include integrating analytical tools like Datameer, Paxata, DataRobot, H2O, MRS, Python, R-Studio, SAS, and Dataiku-Bluedata with Hadoop, along with conducting job level troubleshooting for components such as Yarn, Impala, and others. Proficiency in Unix/Linux and scripting is essential for this role, and you should also have experience with tools like Talend, MySQL Galera, Pepperdata, Autowatch, Netbackup, Solix, UDeploy, and RLM. Additionally, you will be tasked with troubleshooting application issues across various environments and operating platforms to ensure smooth operations. The ideal candidate for this position should have 4 to 6 years of relevant experience, strong knowledge of Hadoop administration, and the ability to excel in a fast-paced and dynamic work environment. Our hiring process consists of screening conducted by the HR team, followed by two technical rounds, and culminating in a final HR round. If you are passionate about Big Data and possess the required skills and experience for this role, we invite you to join our team as a Hadoop Admin and contribute to our exciting projects in Gurgaon, Bangalore, and Hyderabad.,
Posted 1 month ago
8.0 - 13.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Applications Development Sr Programmer Analyst will work in Information management team of Services Technologies, on projects focusing Big Data and public cloud adoption. It is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities. Responsibilities: Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas. Monitor and control all phases of development process and analysis, design, construction, testing, and implementation as well as provide user and operational support on applications to business users. Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business process, system process, and industry standards, and make evaluative judgement. Recommend and develop security measures in post implementation analysis of business usage to ensure successful system design and functionality. Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems. Ensure essential procedures are followed and help define operating standards and processes. Serve as advisor or coach to new or lower level analysts. Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and /or other team members. Qualifications: 8-13 years of work experience with Big Data technologies, such as Spark (Scala/Python), Kafka streaming, Hadoop, HDFS, and solid understanding of Big Data architecture. Strong exposure on SQL, Hands-on experience of Web API. Good understanding of data file formats, Impala, Hadoop, Parquet, Avro, Iceberg, etc. Experience with web services with Kubernetes, and Version control/CI/CD processes with git, Jenkins, harness, etc. Public cloud experience is preferred, preferably AWS. Strong data analysis skills and the ability to slice and dice the data as needed for business reporting. Experience working in an agile environment with fast-paced changing requirements. Excellent planning and organizational skills. Strong Communication skills. Experience in systems analysis and programming of software applications. Education: Bachelors degree/University degree or equivalent experience. This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.,
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Hadoop Developer with 6+ years of experience and a notice period of immediate to 15 days, you will bring hands-on experience in Hadoop, working with HDFS (Hadoop Distributed File System) and utilizing Python and Spark. Your role will involve designing and developing scalable and robust Hadoop-based data pipelines. This position is based in Hyderabad with a general shift schedule. You should possess key skills in Hadoop, Python, HDFS, and Spark to excel in this role. A Bachelor's degree in a related field is required for this full-time, permanent position in the IT/Computers - Software industry. As a valued team member, you will contribute to the growth and success of the organization. If you are seeking a challenging opportunity to leverage your expertise in Hadoop development, this position offers an exciting platform to showcase your skills. Join our team and be a part of innovative projects that drive business success. Job Code: GO/JC/688/2025. Recruiter Name: Sheena Rakesh
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
You have a great opportunity to join as a Data Software Engineer with 5-12 years of experience in Big Data and data-related technologies. We are looking for candidates with an expert-level understanding of distributed computing principles and hands-on experience in Apache Spark, along with proficiency in Python. You should also have experience with technologies like Hadoop, MapReduce, HDFS, Sqoop, Apache Storm, Spark Streaming, Kafka, Hive, and Impala, and with integration of data from various sources such as RDBMS, ERP, and files. Additionally, knowledge of NoSQL databases, ETL techniques, SQL queries, joins, stored procedures, relational schemas, and performance tuning of Spark jobs is required. Moreover, you must have experience with native cloud data services like Azure Databricks and the ability to lead a team efficiently. Familiarity with Agile methodology and designing/implementing Big Data solutions would be an added advantage. This full-time position is based in Hyderabad and requires candidates who are available for face-to-face interactions. If you meet these requirements and are passionate about working with cutting-edge technologies in the field of Big Data, we would love to hear from you.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
Pune, Maharashtra
On-site
About the job:

At Citi, we're not just building technology, we're building the future of banking. Encompassing a broad range of specialties, roles, and cultures, our teams are creating innovations used across the globe. Citi is constantly growing and progressing through our technology, with a laser focus on evolving the ways of doing things. As one of the world's most global banks, we're changing how the world does business.

Shape your Career with Citi. We're currently looking for a high-caliber professional to join our team as AVP - Data Engineer, based in Pune, India. Being part of our team means that we'll provide you with the resources to meet your unique needs, empower you to make healthy decisions, and manage your financial well-being to help plan for your future. For instance:
- We provide programs and services for your physical and mental well-being, including access to telehealth options, health advocates, confidential counseling, and more. Coverage varies by country.
- We empower our employees to manage their financial well-being and help them plan for the future.
- We provide access to an array of learning and development resources to help broaden and deepen your skills and knowledge as your career progresses.

In this role, you're expected to:

Responsibilities - Data Pipeline Development, Design & Automation:
- Design and implement efficient database structures to ensure optimal performance and support analytics.
- Design, implement, and optimize secure data pipelines to ingest, process, and store large volumes of structured and unstructured data from diverse sources, including vulnerability scans, security tools, and assessments.
- Work closely with stakeholders to provide clean, structured datasets that enable advanced analytics and insights into cybersecurity risks, trends, and remediation activities.

Technical Competencies:
- 7+ years of hands-on experience with Scala and Spark.
- 10+ years of experience in designing and developing data pipelines for data ingestion or transformation using Spark with Scala.
- Good experience in Big Data technologies (HDFS, Hive, Apache Spark, Spark SQL, Spark Streaming, Spark job optimization, and Kafka).
- Good knowledge of, and exposure to, various file formats (JSON, Avro, Parquet).
- Knowledge of agile (scrum) development methodology is a plus.
- Strong development/automation skills.
- The right attitude to participate and contribute through all phases of the development lifecycle.
- Secondary skillset: NoSQL, Starburst, Python.
- Optional: Java Spring, Kubernetes, Docker.

Competencies (soft skills):
- Strong communication skills.
- The candidate will be responsible for reporting to both business and technology senior management.
- Need to work with stakeholders and keep them updated on developments, estimation, delivery, and issues.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
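As a small illustration of working with the file formats mentioned above, the following Spark sketch converts raw JSON into Parquet and Avro. Paths and column names are placeholders, and writing Avro assumes the external spark-avro module is available on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

// Sketch of moving data between the file formats listed above (JSON -> Parquet/Avro).
// Paths and columns are placeholders; the "avro" format needs the spark-avro package.
object FormatConversion {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("format-conversion").getOrCreate()

    val raw = spark.read
      .option("multiLine", "true")
      .json("/data/raw/positions.json")

    val cleaned = raw.filter(col("account_id").isNotNull)

    // Columnar Parquet for analytical queries
    cleaned.write.mode("overwrite").parquet("/data/curated/positions_parquet")

    // Row-oriented Avro, e.g. for downstream streaming-style consumers
    cleaned.write.mode("overwrite").format("avro").save("/data/curated/positions_avro")

    spark.stop()
  }
}
```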
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
As a Data Software Engineer at KG Invicta Services Pvt Ltd, you will leverage your 5-12 years of experience in Big Data & Data-related technologies to drive impactful solutions. Your expertise in distributed computing principles and Apache Spark, coupled with hands-on programming skills in Python, will be instrumental in designing and implementing efficient Big Data solutions. You will demonstrate proficiency in a variety of tools and technologies including Hadoop v2, Map Reduce, HDFS, Sqoop, Apache Storm, Spark-Streaming, Kafka, RabbitMQ, Hive, Impala, and NoSQL databases such as HBase, Cassandra, and MongoDB. Your ability to integrate data from diverse sources like RDBMS, ERP, and files, along with knowledge of ETL techniques and frameworks, will ensure seamless data processing and analysis. Performance tuning of Spark jobs, familiarity with Cloud data services like AWS and Azure Databricks, and the capability to lead a team effectively will be key aspects of your role. Your expertise in SQL queries, joins, stored procedures, and relational schemas will contribute to the optimization of data querying processes. Your experience with AGILE methodology and a deep understanding of Big Data querying tools will enable you to contribute significantly to the development and enhancement of stream-processing systems. You will collaborate with cross-functional teams to deliver high-quality solutions that meet business requirements. If you are passionate about leveraging data to drive innovation and possess a strong foundation in Spark, Python, and Cloud technologies, we invite you to join our team as a Data Software Engineer. This is a full-time position with a day shift schedule, and the work location is in person. Category: ML/AI Engineers, Data Scientist, Software Engineer, Data Engineer Expertise: Python (5 Years), AWS (3 Years), Apache Spark (5 Years), PySpark (3 Years), GCP (3 Years), Azure (3 Years), Apache Kafka (3 Years),
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Ops Capability Deployment Analyst at our organization, you will be a seasoned professional contributing to the development of new solutions, frameworks, and techniques while improving processes and workflow for the Enterprise Data function. Your role will involve integrating subject matter and industry expertise within a defined area, requiring an in-depth understanding of how different areas collectively integrate within the sub-function to contribute to the overall business objectives. Your primary responsibility will be to perform data analytics and analysis across various asset classes, as well as to establish data science and tooling capabilities within the team. You will collaborate closely with the wider Enterprise Data team, particularly the front-to-back leads, to deliver on business priorities effectively. Joining the B & I Data Capabilities team within the Enterprise Data, you will be involved in managing the Data quality/Metrics/Controls program and implementing improved data governance and data management practices throughout the region. The focus of the Data quality program will be on enhancing our approach to data risk and meeting regulatory commitments in this area. Key Responsibilities: - Utilize data engineering background and expertise in Distributed Data platforms and Cloud services. - Demonstrate a sound understanding of data architecture and integration with enterprise applications. - Research and assess new data technologies, data mesh architecture, and self-service data platforms. - Collaborate with the Enterprise Architecture Team to define and refine the overall data strategy. - Address performance bottlenecks, design batch orchestrations, and deliver Reporting capabilities. - Conduct complex data analytics on large datasets including data cleansing, transformation, joins, and aggregation. - Develop analytics dashboards and data science capabilities for Enterprise Data platforms. - Communicate findings and propose solutions to stakeholders effectively. - Translate business and functional requirements into technical design documents. - Collaborate with cross-functional teams such as Business Analysis, Product Assurance, Platforms and Infrastructure, Business Office, Control and Production Support. - Prepare handover documents and manage SIT, UAT, and Implementation processes. - Demonstrate a deep understanding of how the development function integrates within the overall business/technology to achieve objectives. - Perform other assigned duties as necessary. Skills & Qualifications: - 10+ years of active development background in Financial Services or Finance IT. - Experience with Data Quality/Data Tracing/Data Lineage/Metadata Management Tools. - Hands-on experience with ETL using PySpark on distributed platforms, data ingestion, Spark optimization, and batch orchestration. - Proficiency in Hive, HDFS, Airflow, and job scheduler. - Strong programming skills in Python with experience in data manipulation and analysis libraries (Pandas, Numpy). - Ability to write complex SQL/Stored Procs. - Experience with DevOps, Jenkins/Lightspeed, Git, CoPilot. - Proficient in one or more BI visualization tools such as Tableau, PowerBI. - Proven experience in implementing Datalake/Datawarehouse for enterprise use cases. - Exposure to analytical tools and AI/ML is desired. Education: - Bachelor's/University degree, master's degree in information systems, Business Analysis, or Computer Science. 
If you are looking for a challenging opportunity where you can utilize your expertise in data analytics, data engineering, and data science, this role offers a dynamic environment where you can contribute to the growth and success of the Enterprise Data function within our organization.
Posted 1 month ago
8.0 - 10.0 years
0 - 0 Lacs
Bangalore
Remote
- At least 8 years of experience and strong knowledge of the Scala programming language; able to write clean, maintainable and efficient Scala code following best practices.
- Good knowledge of fundamental data structures and their usage.
- At least 8 years of experience in designing and developing large-scale, distributed data processing pipelines using Apache Spark and related technologies, with expertise in Spark Core, Spark SQL and Spark Streaming.
- Experience with Hadoop, HDFS, Hive and other big data technologies.
- Familiarity with data warehousing and ETL concepts and techniques.
- Expertise in database concepts and SQL/NoSQL operations.
- UNIX shell scripting for scheduling and running application jobs is an added advantage.
- At least 8 years of experience in project development life cycle activities and maintenance/support projects.
- Work in an Agile environment and participate in daily scrum standups, sprint planning, reviews and retrospectives.
- Understand project requirements and translate them into technical solutions that meet the project quality standards.
- Ability to work in a team in a diverse, multiple-stakeholder environment and collaborate with upstream/downstream functional teams to identify, troubleshoot and resolve data issues.
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication skills.
- Experience in, and desire to work in, a global delivery environment.
- Stay up to date with new technologies and industry trends in development.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Sr. Data Engineer at Lifesight in Bangalore, you will be responsible for building highly scalable, fault-tolerant distributed data processing systems that handle massive amounts of data ingested daily. You will work on processing petabyte-sized data warehouses and Elasticsearch clusters, optimizing data pipelines for quality and resilience, and refining diverse datasets into simplified models to encourage self-service. Your role will involve owning data mapping, business logic, transformations, and ensuring data quality through low-level systems debugging and performance optimization on large production clusters. Additionally, you will participate in architecture discussions, influence product roadmaps, and take ownership of new projects while maintaining and supporting existing platforms and transitioning to newer technology stacks. To excel in this role, you should have proficiency in Python and PySpark, a deep understanding of Apache Spark including tuning and data frame building, and the ability to create Java/Scala Spark jobs for data transformation and aggregation. Your experience with big data technologies such as HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, and Presto, as well as distributed environments using tools like Kafka, Spark, Hive, and Hadoop, will be invaluable. Familiarity with distributed database systems, various file formats like Parquet and Avro, and NoSQL databases is essential, along with experience in cloud platforms like AWS and GCP. Ideally, you should have at least 5 years of professional experience as a data or software engineer. Joining Lifesight means being part of a fast-growing Marketing Measurement Platform with a global impact, where you can influence key decisions on tech stack, product development, and scalable solutions. You will work in small, agile teams within a non-bureaucratic, fast-paced environment that values innovation, collaboration, and personal well-being. Competitive compensation, benefits, and a culture of empowerment that prioritizes work-life balance and team camaraderie await you at Lifesight.,
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
telangana
On-site
You are a highly skilled and detail-oriented ETL QA - Technical Lead with a solid background in Big Data testing, the Hadoop ecosystem, and SQL validation. Your primary responsibility will be leading end-to-end testing efforts for data/ETL pipelines across various big data platforms. You will be working closely with cross-functional teams in an Agile environment to ensure the quality and integrity of large-scale data solutions.
Your key responsibilities include designing and implementing test strategies for validating large datasets, transformations, and integrations. You will be hands-on testing Hadoop-based data platforms such as HDFS, Hive, and Spark. Additionally, you will develop complex SQL queries for data validation and business rule testing. Collaborating with developers, product owners, and business analysts in Agile ceremonies will also be a crucial part of your role. As the ETL QA - Technical Lead, you will own test planning, test case design, defect tracking, and reporting for assigned modules. Identifying areas of automation and building reusable QA assets will be essential, along with driving QA best practices and mentoring junior QA team members.
To excel in this role, you should have 7-11 years of experience in software testing, with a minimum of 3 years in Big Data/Hadoop testing. Strong hands-on experience in testing Hadoop components like HDFS, Hive, Spark, and Sqoop is required. Proficiency in SQL, especially in complex joins, aggregations, and data validation, is essential. Experience in ETL/Data Warehouse testing and familiarity with data ingestion, transformation, and validation techniques are also necessary.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
India
On-site
ETL QA - Technical Lead
Experience: 7 to 11 Years
Job Locations: Hyderabad (1 position) | Gurgaon (1 position)
Job Summary: We are looking for a highly skilled and detail-oriented ETL QA - Technical Lead with strong experience in Big Data Testing, the Hadoop ecosystem, and SQL validation. The ideal candidate should have hands-on experience in test planning, execution, and automation in a data warehouse/ETL environment. You'll work closely with cross-functional teams in an Agile environment to ensure the quality and integrity of large-scale data solutions.
Key Responsibilities:
Lead end-to-end testing efforts for data/ETL pipelines across big data platforms
Design and implement test strategies for validating large datasets, transformations, and integrations
Perform hands-on testing of Hadoop-based data platforms (HDFS, Hive, Spark, etc.)
Develop complex SQL queries for data validation and business rule testing (a brief sketch follows this posting)
Collaborate with developers, product owners, and business analysts in Agile ceremonies
Own test planning, test case design, defect tracking, and reporting for assigned modules
Identify areas of automation and build reusable QA assets
Drive QA best practices and mentor junior QA team members
Required Skills:
7-11 years of experience in Software Testing, with at least 3+ years in Big Data/Hadoop testing
Strong hands-on experience in testing Hadoop components like HDFS, Hive, Spark, Sqoop, etc.
Proficient in SQL (complex joins, aggregations, data validation)
Experience in ETL/Data Warehouse testing
Familiarity with data ingestion, transformation, and validation techniques
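To illustrate the SQL-based data validation named in the responsibilities, below is a hedged Scala/Spark SQL sketch that reconciles per-day row counts between a staging table and the curated Hive table it feeds. The database, table, and column names are hypothetical, and the exit-on-mismatch convention is just one possible way to surface a defect in an automated run.

```scala
import org.apache.spark.sql.SparkSession

// Reconciliation-style validation sketch; staging.orders and curated.orders are made-up tables.
object PipelineReconciliationCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("etl-reconciliation")
      .enableHiveSupport()
      .getOrCreate()

    val mismatches = spark.sql(
      """
        |SELECT s.load_date, s.src_rows, t.tgt_rows
        |FROM   (SELECT load_date, COUNT(*) AS src_rows FROM staging.orders GROUP BY load_date) s
        |JOIN   (SELECT load_date, COUNT(*) AS tgt_rows FROM curated.orders GROUP BY load_date) t
        |  ON   s.load_date = t.load_date
        |WHERE  s.src_rows <> t.tgt_rows
      """.stripMargin)

    // Any returned row means at least one load date failed the count check.
    if (mismatches.count() > 0) {
      mismatches.show(truncate = false)
      sys.exit(1) // fail the run so the defect is tracked
    }

    spark.stop()
  }
}
```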
Posted 1 month ago
2.0 - 5.0 years
2 - 5 Lacs
Bengaluru, Karnataka, India
On-site
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.
Your Impact:
Data Ingestion, Integration and Transformation
Data Storage and Computation Frameworks, Performance Optimizations
Analytics & Visualizations
Infrastructure & Cloud Computing
Data Management Platforms
Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time (a minimal sketch follows below)
Build functionality for data analytics, search and aggregation
Your Skills & Experience:
Minimum 2 years of experience in Big Data technologies
Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage
Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable
Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Well-versed and working knowledge of data platform-related services on Azure
Bachelor's degree and 4 to 6 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position
Set Yourself Apart With:
Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
Performance tuning and optimization of data pipelines
Cloud data specialty and other related Big Data technology certifications
A Tip from the Hiring Manager: Join the team to sharpen your skills and expand your collaborative methods. Make an impact on our clients and their businesses directly through your work.
Additional Information:
Gender-Neutral Policy
18 paid holidays throughout the year
Generous parental leave and new parent transition program
Flexible work arrangements
Employee Assistance Programs to help you in wellness and well-being
Company Description: Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.
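To illustrate the batch and real-time ingestion work referenced above, here is a hedged Scala sketch that uses Spark Structured Streaming to land Kafka events on HDFS. The broker address, topic name, and paths are placeholder assumptions; any downstream parsing of the payload is out of scope for the sketch.

```scala
import org.apache.spark.sql.SparkSession

// Real-time ingestion sketch: Kafka -> Parquet landing zone via Structured Streaming.
// Broker, topic, and HDFS paths are illustrative placeholders.
object KafkaIngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-ingest")
      .getOrCreate()

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // assumed broker
      .option("subscribe", "clickstream-events")         // assumed topic
      .option("startingOffsets", "latest")
      .load()

    // Kafka delivers binary key/value; cast the payload to string before downstream parsing.
    val events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/landing/clickstream")       // assumed landing zone
      .option("checkpointLocation", "hdfs:///chk/clickstream")  // required for fault tolerance
      .start()

    query.awaitTermination()
  }
}
```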
Posted 1 month ago