
4958 Hadoop Jobs - Page 45

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

SQL Developer with SSIS (ETL Developer)
Location: Hyderabad (Hybrid Model)
Experience Required: 5+ Years
Joining Timeline: Immediate to 20 Days
Role Type: Individual Contributor (IC)

Position Summary
We are seeking a skilled SQL Developer with strong SSIS expertise to join a dynamic team supporting a leading US-based banking client. This is a hybrid role based in Hyderabad, suited for professionals experienced in building scalable, auditable ETL pipelines and collaborating within Agile teams.

Must-Have Skills (Skill: Proficiency)
SQL Development: Expert in writing complex T-SQL queries, stored procedures, joins, and transactions; proficient in handling error logging and audit logic for production-grade environments.
ETL using SSIS: Strong experience in designing, implementing, and debugging SSIS packages using components like script tasks, event handlers, and nested packages.
Batch Integration: Hands-on experience in managing high-volume batch data ingestion from various sources using SSIS, with performance and SLA considerations.
Agile Delivery: Actively contributed to Agile/Scrum teams; participated in sprint planning, code reviews, and demos, and met sprint commitments.
Stakeholder Collaboration: Proficient in engaging with business/product owners for requirement gathering, transformation validation, and output review. Excellent communication skills required.

Key Responsibilities
Design and develop robust, auditable SSIS workflows based on business and data requirements.
Ensure efficient deployment and maintenance using CI/CD tools like Jenkins or UCD.
Collaborate with stakeholders to align solutions with business needs and data governance standards.
Maintain and optimize SQL/SSIS packages for production environments, ensuring traceability, performance, and error handling.

Nice-to-Have Skills (Skill: Detail)
Cloud ETL (ADF): Exposure to Azure Data Factory or equivalent ETL tools.
CI/CD (Jenkins/UCD): Familiar with DevOps deployment tools and pipelines.
Big Data (Spark/Hadoop): Understanding of or integration experience with big data systems.
Other RDBMS (Oracle/Teradata): Experience in querying and integrating data from additional platforms.

Apply here: sapna@helixbeat.com
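
As a loose illustration (not part of the posting) of the error-logging and audit pattern the must-have skills describe, here is a minimal Python/pyodbc sketch that wraps an ETL step in a transaction and records success or failure to an audit table; the connection string, stored procedure, and table names are hypothetical:

    import pyodbc

    # Hypothetical connection string and object names, for illustration only.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=etl_db;Trusted_Connection=yes;"
    )
    conn.autocommit = False
    cur = conn.cursor()

    try:
        # Run the load step (e.g. a stored procedure an SSIS package would call).
        cur.execute("EXEC dbo.usp_load_daily_transactions @batch_date = ?", "2025-06-13")
        # Record a success row in the audit table.
        cur.execute(
            "INSERT INTO dbo.etl_audit (step_name, status, message) VALUES (?, ?, ?)",
            ("load_daily_transactions", "SUCCESS", "rows loaded"),
        )
        conn.commit()
    except pyodbc.Error as exc:
        conn.rollback()
        # Log the failure so the batch remains auditable.
        cur.execute(
            "INSERT INTO dbo.etl_audit (step_name, status, message) VALUES (?, ?, ?)",
            ("load_daily_transactions", "FAILED", str(exc)),
        )
        conn.commit()
    finally:
        conn.close()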

Posted 1 week ago

Apply

7.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Greetings from TCS! TCS is hiring for Databricks Architect.
Interview Mode: Virtual
Required Experience: 7-15 years
Work location: Chennai, Kolkata, Hyderabad

Must have: Hands-on experience in ADF, Azure Databricks, PySpark, Azure Data Factory, Unity Catalog, data migrations, data security
Good to have: Spark SQL, Spark Streaming, Kafka
Hands-on in Databricks on AWS, Apache Spark, AWS S3 (data lake), AWS Glue, Amazon Redshift, Amazon Athena, AWS Data Catalog, AWS RDS, AWS EMR (Spark/Hadoop), CI/CD (CodePipeline, CodeBuild)
Good to have: AWS Lambda, Python, AWS CI/CD, Kafka, MLflow, TensorFlow or PyTorch, Airflow, CloudWatch

If interested, kindly send your updated CV and the details below via DM or e-mail to srishti.g2@tcs.com:
Name:
E-mail ID:
Contact Number:
Highest qualification (full-time):
Preferred Location:
Highest qualification university:
Current organization:
Total years of experience:
Relevant years of experience:
Any gap (mention no. of months/years, career/education):
If any, reason for gap:
Is it a career restart (rebegin):
Previous organization name:
Current CTC:
Expected CTC:
Notice Period:
Have you worked with TCS before (Permanent/Contract):
If shortlisted, will you be available for a virtual interview on 13-Jun-25 (Friday)?

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary:
We are seeking a highly skilled Lead Data Engineer/Associate Architect to lead the design, implementation, and optimization of scalable data architectures. The ideal candidate will have a deep understanding of data modeling, ETL processes, cloud data solutions, and big data technologies. You will work closely with cross-functional teams to build robust, high-performance data pipelines and infrastructure to enable data-driven decision-making.

Experience: 8-12+ years
Work Location: Hyderabad (Hybrid)
Mandatory skills: Python, SQL, Snowflake
Contract to Hire: 6+ months

Responsibilities:
Design and develop scalable and resilient data architectures that support business needs, analytics, and AI/ML workloads.
Data Pipeline Development: Design and implement robust ETL/ELT processes to ensure efficient data ingestion, transformation, and storage.
Big Data & Cloud Solutions: Architect data solutions using cloud platforms like AWS, Azure, or GCP, leveraging services such as Snowflake, Redshift, BigQuery, and Databricks.
Database Optimization: Ensure performance tuning, indexing strategies, and query optimization for relational and NoSQL databases.
Data Governance & Security: Implement best practices for data quality, metadata management, compliance (GDPR, CCPA), and security.
Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to translate business requirements into scalable solutions.
Technology Evaluation: Stay updated with emerging trends, assess new tools and frameworks, and drive innovation in data engineering.

Required Skills:
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
Experience: 8-12+ years of experience in data engineering.
Cloud Platforms: Strong expertise in AWS data services.
Big Data Technologies: Experience with Hadoop, Spark, Kafka, and related frameworks.
Databases: Hands-on experience with SQL, NoSQL, and columnar databases such as PostgreSQL, MongoDB, Cassandra, and Snowflake.
Programming: Proficiency in Python, Scala, or Java for data processing and automation.
ETL Tools: Experience with tools like Apache Airflow, Talend, DBT, or Informatica.
Machine Learning & AI Integration (Preferred): Understanding of how to architect data solutions for AI/ML applications.
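
Purely as an illustration of the Python/SQL/Snowflake ELT work listed among the mandatory skills (not part of the job description itself), a minimal load-and-transform step using the Snowflake Python connector could look like the sketch below; the account, stage, and table names are hypothetical:

    import snowflake.connector

    # Hypothetical credentials and object names, for illustration only.
    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="***",
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="STAGING",
    )

    try:
        cur = conn.cursor()
        # Bulk-load staged files into a raw table (the extract/load step).
        cur.execute("""
            COPY INTO STAGING.RAW_ORDERS
            FROM @STAGING.S3_ORDERS_STAGE
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        """)
        # Transform inside the warehouse (the "T" in ELT).
        cur.execute("""
            CREATE OR REPLACE TABLE ANALYTICS.MARTS.DAILY_ORDER_SUMMARY AS
            SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
            FROM STAGING.RAW_ORDERS
            GROUP BY order_date
        """)
    finally:
        conn.close()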

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
Responsible for building and maintaining high-performance data systems that enable deeper insights for all parts of our organization.
Responsible for developing ETL/ELT pipelines for both batch and streaming data.
Responsible for data flow for real-time and analytics use.
Improve data pipeline performance by implementing the industry's best practices and different techniques for parallel data processing.
Responsible for the documentation, design, development and testing of Hadoop reporting and analytical applications.
Responsible for technical discussion and finalization of the requirement by communicating effectively with stakeholders.
Responsible for converting functional requirements into detailed technical designs.
Responsible for adhering to SCRUM timelines and delivering accordingly.
Responsible for preparing the Unit/SIT/UAT test cases and logging the results.
Responsible for planning and tracking the implementation to closure.
Ability to drive enterprise-wide initiatives for usage of external data.
Envision an enterprise-wide Entitlements platform and align it with the Bank's NextGen technology vision.
Continually look for process improvements.
Coordinate between various technical teams for various systems for smooth project execution, starting from technical requirements discussion, overall architecture design, technical solution discussions, build, unit testing, regression testing, system integration testing, user acceptance testing, go live, user verification testing and rollback (if required).
Prepare a technical plan with clear milestone dates for technical tasks, which will be input to the PM's overall project plan.
Coordinate on a need basis with technical teams across technology who are not directly involved in the project, for example firewall/network teams, DataPower teams, EDMP, OAM, OIM, ITSC, GIS teams etc.
Responsible for supporting the change management process.
Responsible for working alongside PSS teams and ensuring proper KT sessions are provided to the support teams.
Identify any risks within the project and get them recorded in Risk Wise after discussion with business and manager.
Ensure the project delivery is seamless with zero to negligible defects.

Key Responsibilities
Hands-on experience with C++, .Net, SQL, jQuery, Web API & services, Postgres SQL & MS SQL Server, Azure DevOps & related tooling, GitHub, ADO CI/CD pipelines.
Should be transversal enough to handle Linux, PowerShell, Unix shell scripting, Kafka and Spark streaming.
Hadoop ecosystem: Hive, Spark, Python, PySpark.
Hands-on experience with workflows/schedulers like NiFi and Ctrl-M.
Experience with data loading tools like Sqoop.
Experience and understanding of object-oriented programming.
Motivation to learn innovative trades of programming, debugging, and deploying.
Self-starter, with excellent self-study skills and growth aspirations, capable of working without direction and able to deliver technical projects from scratch.
Excellent written and verbal communication skills.
Flexible attitude; performs under pressure.
Ability to lead and influence the direction and strategy of the technology organization.
Test-driven development, commitment to quality and a thorough approach to work.
A good team player with the ability to meet tight deadlines in a fast-paced environment.
Guide junior developers and share best practices.
A cloud certification will be an added advantage: any one of Azure/AWS/GCP.
Must have knowledge and understanding of Agile principles.
Must have a good understanding of the project life cycle.
Must have sound problem analysis and resolution abilities.
Good understanding of external and internal data management and the implications of cloud usage in the context of external data.

Strategy
Develop the strategic direction and roadmap for CRES TTO, aligning with the Business Strategy, ITO Strategy and investment priorities.

Business
Work hand in hand with Product Owners, Business Stakeholders, Squad Leads and CRES TTO partners, taking product programs from investment decisions into design, specifications, solutioning, development, implementation and hand-over to operations, securing support and collaboration from other SCB teams.
Ensure delivery to business meets time, cost and high quality constraints.
Support respective businesses in growing return on investment, commercialisation of capabilities, bid teams, monitoring of usage, improving client experience, enhancing operations, addressing defects and continuous improvement of systems.
Foster an ecosystem of innovation and enable business through technology.

Governance
Promote an environment where compliance with internal control functions and the external regulatory framework is embedded.

People & Talent
Ability to work with other developers and assist junior team members.
Identify training needs and take action to ensure company-wide compliance.
Pursue continuing education on new solutions, technology, and skills.
Problem solving with other team members in the project.

Risk Management
Interpreting briefs to create high-quality coding that functions according to specifications.

Key Stakeholders
CRES Domain Clients
Functions MT members, Operations and COO
ITO engineering, build and run teams
Architecture and Technology Support teams
Supply Chain Management, Risk, Legal, Compliance and Audit teams
External vendors

Regulatory & Business Conduct
Display exemplary conduct and live by the Group's Values and Code of Conduct.
Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct.
Lead the team to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.
Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.
Serve as a Director of the Board.
Exercise authorities delegated by the Board of Directors and act in accordance with the Articles of Association (or equivalent).

Other Responsibilities
Embed Here for good and the Group's brand and values in the team.
Perform other responsibilities assigned under Group, Country, Business or Functional policies and procedures.
Multiple functions (double hats).

Skills and Experience
Technical Project Delivery (Agile & Classic)
Vendor Management
Stakeholder Management

Qualifications
5+ years in a lead development role.
Should have managed a team of a minimum of 5 members.
Should have delivered multiple projects end to end.
Experience in property technology products (e.g. Lenel, CBRE, Milestone).
Strong analytical, numerical and problem-solving skills.
Should be able to understand and communicate technical details of the project.
Good communication skills, oral and written.
Very good exposure to technical projects, e.g. server maintenance, system administration, or development or implementation experience.
Effective interpersonal and relational skills to be able to coach and develop the team to deliver their best.
Certified Scrum Master.

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.
Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What We Offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
Time off including annual leave, parental/maternity leave (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum.
Flexible working options based around home and office locations, with flexible working patterns.
Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits.
A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.
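
As a rough illustration (not part of the posting) of the Kafka/Spark streaming skills this role lists under Key Responsibilities, a minimal PySpark Structured Streaming job that reads a Kafka topic and lands Parquet files (which a Hive external table could sit on) might look as follows; the broker, topic, and path names are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    # Hypothetical topic, broker, and path names, for illustration only.
    spark = SparkSession.builder.appName("txn-stream-demo").getOrCreate()

    schema = StructType([
        StructField("account_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_time", StringType()),
    ])

    # Read a Kafka topic as a stream and parse the JSON payload.
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "transactions")
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Land the parsed events as Parquet files with a checkpoint for fault tolerance.
    query = (
        events.writeStream.format("parquet")
        .option("path", "/data/landing/transactions")
        .option("checkpointLocation", "/data/checkpoints/transactions")
        .start()
    )
    query.awaitTermination()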

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

We are seeking a highly skilled and motivated Senior Data Engineer with expertise in Databricks and Azure to join our team. As a Senior Data Engineer, you will be responsible for designing, developing and maintaining our data lakehouse and pipelines. You will work closely with the Data & Analytics teams to ensure efficient data flow and enable data-driven decision-making. The ideal candidate will have a strong background in data engineering, experience with Databricks, Azure Data Factory and other Azure services, and a passion for working with large-scale data sets.

Role Description
Design, develop and maintain the solutions required for data processing, storage and retrieval. Create scalable, reliable and efficient data pipelines that enable data developers and engineers, data analysts and business stakeholders to access and analyze large volumes of data. Collaborate closely with other team members and the Product Owner.

Job Requirements

Key Responsibilities
Collaborate with the Product Owner, Business Analyst and other team members to understand requirements and design scalable data pipelines and architectures.
Build and maintain data ingestion, transformation and storage processes using Databricks and Azure services.
Develop efficient ETL/ELT workflows to extract, transform and load data from various sources into data lakes.
Design solutions and drive implementation for enhancing, improving and securing the Data Lakehouse.
Optimize and fine-tune data pipelines for performance, reliability and scalability.
Implement data quality checks and monitoring to ensure data accuracy and integrity.
Work with data developers, engineers and data analysts to provide them with the necessary data infrastructure and tools for analysis and reporting.
Troubleshoot and resolve data-related issues, including performance bottlenecks and data inconsistencies.
Stay up to date with the latest trends and technologies in data engineering and recommend improvements to existing systems and processes.

Skillset
Highly self-motivated, works independently, assumes ownership and is results oriented.
A desire and interest to stay up to date with the latest changes in Databricks, Azure and related data platform technologies.
Time-management skills and the ability to establish reasonable and attainable deadlines for resolution.
Strong programming skills in languages such as SQL, Python, Scala or Spark.
Experience working with Databricks and Azure services, such as Azure Data Lake Storage, Azure Data Factory, Azure Databricks, Azure SQL Database and Azure Synapse Analytics.
Proficiency in data modeling, database design and Spark SQL query optimization.
Familiarity with big data technologies and frameworks like Hadoop, Spark and Hive.
Familiarity with data governance and security best practices.
Knowledge of data integration patterns and tools.
Understanding of cloud computing concepts and distributed computing principles.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills to work effectively in an agile team environment.
Ability to handle multiple tasks and prioritize work in a fast-paced and dynamic environment.

Qualifications
Bachelor's degree in Computer Science, Engineering or a related field.
4+ years of proven experience as a Data Engineer, with a focus on designing and building data pipelines.
Experience in working with big and complex data environments.
Certifications in Databricks or Azure services are a plus.
Experience with data streaming technologies such as Apache Kafka or Azure Event Hubs is a plus.

Company Description
Here at SoftwareOne, we give you the flexibility to unleash your creativity, without limits. We encourage autonomy and thinking outside the box, and we can't wait to hear your new ideas. Although all businesses say it, we truly believe in work-life harmony. Our people are our greatest asset, and we'll go the extra mile to ensure you're happy here. We want our people to be their true authentic selves at all times, because that's when real creativity happens.

At SoftwareOne, we believe that our people are our greatest asset. We offer:
A flexible work environment that encourages creativity and innovation.
Opportunities for professional growth and development.
An inclusive team culture where your ideas are valued and your contributions make a difference.
The chance to work on ambitious projects that push the boundaries of technology.
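
As a rough sketch (not part of the posting) of the Databricks/Azure ingestion step the role describes, the PySpark snippet below reads raw files from Azure Data Lake Storage and writes a curated Delta table; the storage account, container, and table names are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, to_date

    # On Databricks a SparkSession already exists as `spark`; shown here for completeness.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical ADLS Gen2 path (abfss://<container>@<storage-account>.dfs.core.windows.net/...).
    raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/2025/06/"

    raw = (
        spark.read.format("csv")
        .option("header", True)
        .option("inferSchema", True)
        .load(raw_path)
    )

    curated = (
        raw.withColumn("order_date", to_date(col("order_ts")))
        .filter(col("amount") > 0)          # basic data-quality check
        .dropDuplicates(["order_id"])
    )

    # Persist as a Delta table that analysts can query from the lakehouse.
    curated.write.format("delta").mode("overwrite").saveAsTable("sales.curated_orders")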

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Cloud and AWS Expertise:
In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena.
Strong understanding of cloud architecture and best practices for high availability and fault tolerance.

Data Engineering Concepts:
Expertise in ETL/ELT processes, data modeling, and data warehousing.
Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark.
Proficiency in handling structured and unstructured data.

Programming and Scripting:
Proficiency in Python, PySpark and SQL for data manipulation and pipeline development.
Expertise in working with data warehousing solutions like Redshift.
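
To illustrate the S3/EMR/PySpark batch ETL skills this listing names (this sketch is not part of the posting, and the bucket names and columns are hypothetical), a minimal extract-transform-load job could look like:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    # Hypothetical bucket and prefix names, for illustration only.
    spark = SparkSession.builder.appName("s3-batch-etl-demo").getOrCreate()

    # Extract: read raw JSON events from S3 (e.g. on an EMR cluster).
    events = spark.read.json("s3://example-raw-bucket/events/2025/06/")

    # Transform: keep valid rows and derive a partition column.
    cleaned = (
        events.filter(col("user_id").isNotNull())
        .withColumn("event_date", col("event_ts").cast("date"))
    )

    # Load: write partitioned Parquet back to S3 for Athena or Redshift Spectrum to query.
    (
        cleaned.write.mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/")
    )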

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Head of Application Development & Support
Job Requisition: R0080416
No. of Vacancies: 1
Location: Pune
Full time/Part time: Full time
Regular/Temporary: Regular

SANDVIK COROMANT is the world's leading supplier of tools, tooling solutions and know-how to the metalworking industry. With extensive investments in research and development, we create unique innovations and set new productivity standards together with our customers. These include the world's major automotive, aerospace and energy industries. Sandvik Coromant has 8,000 employees and is represented in 130 countries. We are part of the business area Sandvik Machining Solutions within the global industrial group Sandvik.

At Sandvik Coromant, we are driven by a passion for excellence in everything we do. Our belief is that sustainable success is a team effort, and with our profound knowledge of metal cutting and insight into the varying challenges of different industries, we strive to develop innovative solutions in collaboration with our customers to meet both current and future demands. We are seeking people who are passionate about their work and possess the drive to excel to join us.

Purpose:
Head of Application Development and Support is a global role in which you will be responsible for developing, managing and enhancing digital solutions/applications curated by DIH members or your team members. You are responsible for driving end-to-end software/application delivery, ensuring the quality and speed of execution across web and mobile platforms. Leveraging and institutionalizing an agile way of working, the Head of Application Development and Support will understand business logic, guide the application development team and oversee the software/digital solutions development lifecycle. You will own and implement industry best practices, and create sustainable development and support processes, eventually leading the application development team from India. Additionally, this role will focus on hiring, developing, and motivating talent while being a hands-on technical leader who can engage in detailed problem-solving.

Main Responsibilities:
Collaborate with stakeholders to define and execute software development goals, ensuring alignment with the company's digital strategy.
Lead the timely and high-quality execution of the digital applications portfolio by leveraging internal and external resources.
Design user interfaces and implement front-end components using HTML, CSS, and JavaScript frameworks such as React or Angular.
Develop server-side logic and database integration using languages such as Node.js, Python, or Java.
Collaborate with designers, product managers, and other stakeholders to define project requirements and deliverables.
Write clean, efficient, and maintainable code following industry best practices.
Perform code reviews and provide constructive feedback to team members.
Troubleshoot and debug issues reported by clients or internal stakeholders.
Stay updated on emerging technologies and trends in web development.
Continuously refine and implement scalable processes for software development, deployment, and support.
Use structured frameworks like Scrum methodologies to ensure cross-functional engagement and delivery accountability.
Work with agile development methodologies, adhering to best practices and pursuing continued learning opportunities.
Identify skill gaps and address them through targeted hiring, strategic partnerships, and upskilling initiatives.
Actively develop and motivate team members by providing real-time coaching, assigning developmental projects, and fostering career growth.
Ensure that global digital initiatives improve the customer experience and drive the adoption of digital solutions.
Collaborate effectively with cross-functional teams like Corporate IT, Cyber Security, Data and AI teams, Digital platform product owners, and Commercial and Operational stakeholders to deliver high-impact projects.
Act as a technical authority, providing guidance on architecture, design, and implementation.
Help with application feasibility analysis, building use cases related to software development, and testing new digital applications/solutions, processes and operational changes that will improve productivity and end-user experience.
Work with the team to develop intelligent dashboards, reporting, and analysis tools.
Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design.
Conduct usability testing and gather feedback from users to continuously improve the user experience.
Stay updated on the latest trends and technologies in software development, full-stack development, database management, UI/UX design etc.

Key Competencies:
Master's or bachelor's degree in computer science, software engineering, mathematics or similar fields.
10 to 15 years of experience in leading and managing large, multi-disciplinary software/applications/digital solutions teams in a global setup.
Hands-on experience in application/software development.
5+ years of experience in a managerial/team management role.
Experience of working in a cross-functional team in a global setup.
Experience in setting up an agile way of working and mentoring teams on agile/Scrum methodology.
Experience in delivering multi-stack applications for different industry verticals.
Software Development: Understanding of various programming languages and software development methodologies.
Database Management: Understanding of database systems (SQL, Oracle) to manage and organize digital assets effectively.
Security: Understanding of cybersecurity principles to safeguard digital assets from threats and vulnerabilities.
Integration: Ability to support integration of different systems and solutions within the catalogue to ensure interoperability.
Basic understanding of data visualization, data modelling and data analysis (preferably Power BI).
Basic understanding of data engineering (non-drag-and-drop ETL, data wrangling, data quality, warehousing, etc.).
Good understanding of software development project management tools such as DevOps, Jira, Kanban, Gantt charts, Miro.
Good understanding of the different phases of web applications such as concept, development, testing, deployment and maintenance.
Conceptual knowledge of open-source/open-standards big data technologies, e.g. Hadoop, Spark, Hive, HBase, Cassandra, Drill, Databricks, EMR/HDInsight, etc.
Knowledge of streaming data technology and its uses (Kafka/Kinesis, Confluent Platform, Flink, Samza, Spark Streaming, Druid, Elasticsearch, etc.) would be an added advantage.
Stakeholder Management: Ability to communicate effectively with stakeholders, including developers, users, and management, to understand requirements and gather feedback.
Training and Support: Skill in providing training and support to users of the digital solutions within the catalogue.

Benefits:
Sandvik offers a competitive total compensation package including comprehensive benefits. In addition, we provide opportunities for professional competence development and training, as well as opportunities for career advancement.

How to apply:
You may upload your updated profile in Workday against JR number R0080416 through your login, no later than June 27, 2025. Alternatively, please send your application by registering on our site www.sandvik.com/career and uploading your CV against JR number R0080416 by June 27, 2025.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description Details
Role: Senior Developer
Required Technical Skill Set: Spark/Scala/Unix
Desired Experience Range: 5-8 years
Location of Requirement: Pune

Desired Competencies (Technical/Behavioural Competency)

Must-Have (ideally no more than 3-5)
Minimum 4+ years of experience in Spark and Scala development.
Experience in designing and developing solutions for Big Data using Hadoop ecosystem technologies, including components like HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce and Sqoop.
Good experience in writing and optimizing Spark jobs, Spark SQL etc.
Should have worked on both batch and streaming data processing.
Experience in writing and optimizing complex Hive and SQL queries to process huge data; good with UDFs, tables, joins, views etc.
Experience in debugging Spark code.
Working knowledge of basic UNIX commands and shell scripting.
Experience with Autosys and Gradle.

Good-to-Have
Good analytical and debugging skills.
Ability to coordinate with SMEs and stakeholders, manage timelines and escalations, and provide on-time status.
Write clear and precise documentation/specifications.
Work in an agile environment.
Create documentation and document all developed mappings.

Responsibility of / Expectations from the Role
1. Create Scala/Spark jobs for data transformation and aggregation.
2. Produce unit tests for Spark transformations and helper methods.
3. Write Scaladoc-style documentation with all code.
4. Design data processing pipelines.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Name: Senior Data Engineer (Azure)
Years of Experience: 5

Job Description: We are looking for a skilled and experienced Senior Azure Developer to join our team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: ADF, Databricks
Secondary Skills: DBT, Python, Databricks, Airflow, Fivetran, Glue, Snowflake

Role Description: This data engineering role requires creating and managing the technological infrastructure of a data platform, being in charge of or involved in architecting, building, and managing data flows/pipelines, and constructing data storages (NoSQL, SQL), tools to work with big data (Hadoop, Kafka), and integration tools to connect sources or other databases.

Role Responsibility:
Translate functional specifications and change requests into technical specifications.
Translate business requirement documents, functional specifications, and technical specifications into related coding.
Develop efficient code with unit testing and code documentation.
Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving.
Set up the development environment and configure the development tools.
Communicate with all project stakeholders on the project status.
Manage, monitor, and ensure the security and privacy of data to satisfy business needs.
Contribute to the automation of modules, wherever required.
Be proficient in written, verbal and presentation communication (English).
Coordinate with the UAT team.

Role Requirement:
Proficient in basic and advanced SQL programming concepts (procedures, analytical functions etc.).
Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions etc.).
Knowledgeable in Shell/PowerShell scripting.
Knowledgeable in relational databases, non-relational databases, data streams, and file stores.
Knowledgeable in performance tuning and optimization.
Experience in data profiling and data validation.
Experience in requirements gathering and documentation processes and performing unit testing.
Understanding and implementing QA and various testing processes in the project.
Knowledge of any BI tool will be an added advantage.
Sound aptitude, outstanding logical reasoning, and analytical skills.
Willingness to learn and take initiative.
Ability to adapt to a fast-paced Agile environment.

Additional Requirement:
Demonstrated expertise as a Data Engineer, specializing in Azure cloud services.
Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics.
Create and execute efficient, scalable, and dependable data pipelines utilizing Azure Data Factory.
Utilize Azure Databricks for data transformation and processing.
Effectively oversee and enhance data storage solutions, emphasizing Azure Data Lake and other Azure storage services.
Construct and uphold workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools.
Proficient in programming languages like Python and SQL, and conversant with pertinent scripting languages.
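
Since the requirements mention change data capture and slowly changing dimensions, here is a loose sketch (not part of the posting) of a type-1 slowly changing dimension upsert on Databricks, using a MERGE over a Delta table; the landing path and table/column names are hypothetical:

    from pyspark.sql import SparkSession

    # On Databricks `spark` already exists; table and column names below are hypothetical.
    spark = SparkSession.builder.getOrCreate()

    # A daily change-data-capture feed landed by ADF, and a dimension kept as a Delta table.
    spark.read.parquet("/mnt/landing/customers_cdc/").createOrReplaceTempView("customer_updates")

    # Type-1 slowly changing dimension: overwrite changed attributes, insert new keys.
    spark.sql("""
        MERGE INTO dims.customer AS t
        USING customer_updates AS s
          ON t.customer_id = s.customer_id
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)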

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

THIS IS A LONG-TERM CONTRACT POSITION WITH ONE OF THE LARGEST GLOBAL TECHNOLOGY LEADERS.

Our large Fortune client is ranked as one of the best companies to work with in the world. The client fosters a progressive culture, creativity, and a flexible work environment. They use cutting-edge technologies to keep themselves ahead of the curve. Diversity in all aspects is respected. Integrity, experience, honesty, people, humanity, and passion for excellence are some other adjectives that define this global technology leader.

Responsibilities:
Contribute to the team's vision and articulate strategies to have fundamental impact at our massive scale.
Bring a product-focused mindset: it is essential for you to understand business requirements and architect systems that will scale and extend to accommodate those needs.
Diagnose and solve complex problems in distributed systems, develop and document technical solutions, and sequence work to make fast, iterative deliveries and improvements.
Build and maintain high-performance, fault-tolerant, and scalable distributed systems that can handle our massive scale.
Provide solid leadership within your own problem space through a data-driven approach, robust software designs, and effective delegation.
Participate in, or spearhead, design reviews with peers and stakeholders to adopt what's best suited amongst available technologies.
Review code developed by other developers and provide feedback to ensure best practices (e.g., checking code in, accuracy, testability, and efficiency).
Automate cloud infrastructure, services, and observability.
Develop CI/CD pipelines and testing automation (nice to have).
Establish and uphold best engineering practices through thorough code and design reviews and improved processes and tools.
Groom junior engineers through mentoring and delegation.
Drive a culture of trust, respect, and inclusion within your team.

Minimum Qualifications:
Bachelor's degree in Computer Science, Engineering or a related field, or equivalent training, fellowship, or work experience.
Minimum 5 years of experience curating data and hands-on experience working with ETL/ELT tools.
Strong overall programming skills, able to write modular, maintainable code, preferably in Python and SQL.
Strong data warehousing concepts and SQL skills; understanding of SQL, dimensional modelling, and at least one relational database.
Experience with AWS.
Exposure to Snowflake and ingesting data into it, or exposure to similar tools.
Humble, collaborative team player, willing to step up and support your colleagues.
Effective communication, problem solving and interpersonal skills.
Commitment to grow deeper in the knowledge and understanding of how to improve our existing applications.

Preferred Qualifications:
Experience with the following tools: DBT, Fivetran, Airflow.
Knowledge and experience in Spark, Hadoop 2.0, and its ecosystem.
Experience with automation frameworks/tools like Git, Jenkins.

Primary Skills: Snowflake, Python, SQL, DBT
Secondary Skills: Fivetran, Airflow, Git, Jenkins, AWS, SQL DBM
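
To illustrate the Airflow orchestration named in the preferred skills (this sketch is not part of the posting; the DAG id and task bodies are hypothetical, and the exact parameter names can vary slightly between Airflow 2.x releases), a minimal two-step ELT DAG could look like:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical task logic, for illustration only.
    def extract_to_stage(**context):
        # e.g. pull files from an API or S3 and push them to a Snowflake stage
        print("extracting batch for", context["ds"])

    def run_transformations(**context):
        # e.g. trigger dbt models or Snowflake SQL that build the marts
        print("transforming batch for", context["ds"])

    with DAG(
        dag_id="daily_elt_pipeline",
        start_date=datetime(2025, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_to_stage", python_callable=extract_to_stage)
        transform = PythonOperator(task_id="run_transformations", python_callable=run_transformations)

        extract >> transform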

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site

Required Skills
Proficiency in OOP concepts at an expert level.
Strong command of Core Java.
Extensive knowledge of one or more frameworks like Spring (MVC, Boot, Data/JPA, Cloud).
Full-stack web development expertise including HTML, JavaScript, jQuery, CSS/Less, Angular or React.
Proficient use of Git, GitHub/Bitbucket, or similar SCM tools.
Exceptional debugging and troubleshooting skills across client-side, server-side, and database domains.
Proficiency in static code analysis and performance management tools across client-side, server-side, and database domains.
Expertise in one or more databases such as MySQL, PostgreSQL, Oracle.
Exposure to Hibernate or a similar ORM framework.
Understanding of release and build management, deployment steps, and methodologies.

Nice to have
Cloud platforms (AWS/GCP/Azure), Kafka/RabbitMQ, MongoDB, Hadoop, JUnit, Jasmine, Karma.

Qualifications
B.E. / B.Tech. / MCA
4+ years of relevant experience

Posted 1 week ago

Apply

50.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are hiring for Digit88.

About Digit88
Digit88 empowers digital transformation for innovative and high-growth B2B and B2C SaaS companies as their trusted offshore software product engineering partner. We are a lean, mid-stage software company with a team of 75+ fantastic technologists, backed by executives with deep understanding of and extensive experience in consumer and enterprise product development across large corporations and startups. We build highly efficient and effective engineering teams that solve real and complex problems for our partners. With more than 50+ years of collective experience in areas ranging from B2B and B2C SaaS, web and mobile apps, e-commerce platforms and solutions, and custom enterprise SaaS platforms, across domains spanning conversational AI, chatbots, IoT, health-tech, ESG/energy analytics and data engineering, the founding team thrives in a fast-paced and challenging environment that allows us to showcase our best.

The Vision: To be the most trusted technology partner to innovative software product companies worldwide.

The Opportunity
The Digit88 development team is establishing a new offshore product development team for its partner, which is building next-generation Big Data, cloud-based business operation support technology for utilities, retail energy suppliers and Community Choice Aggregators (CCA). The candidate would be joining an existing team of outstanding data engineers in the US, helping us expand the data engineering team and working on different products and on different layers of the infrastructure.

Job Profile
Digit88 is looking for a Big Data Engineer who will build and manage Big Data pipelines to deal with the huge structured data sets that we use as input to accurately generate analytics at scale for our valued customers. The primary focus will be on choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company. Applicants must have a passion for engineering with accuracy and efficiency, be highly motivated and organized, be able to work as part of a team, and also possess the ability to work independently with minimal supervision.

To be successful in this role, you should possess
Collaborate closely with Product Management and Engineering leadership to devise and build the right solution.
Participate in design discussions and brainstorming sessions to select, integrate, and maintain the Big Data tools and frameworks required to solve Big Data problems at scale.
Design and implement systems to cleanse, process, and analyze large data sets using distributed processing tools like Akka and Spark.
Understand and critically review existing data pipelines, and come up with ideas in collaboration with technical leaders and architects to improve upon current bottlenecks.
Take initiative, and show the drive to pick up new things proactively; work as a senior individual contributor on the multiple products and features we have.
8+ years of experience in developing highly scalable Big Data pipelines.
Hands-on experience in team leading and in leading product or module development.
In-depth understanding of the Big Data ecosystem, including processing frameworks like Spark, Akka, Storm, and Hadoop, and the file types they deal with.
Experience with ETL and data pipeline tools like Apache NiFi, Airflow etc.
Excellent coding skills in Java or Scala, including the understanding to apply appropriate design patterns when required.
Experience with Git and build tools like Gradle/Maven/SBT.
Strong understanding of object-oriented design, data structures, algorithms, profiling, and optimization.
An elegant, readable, maintainable and extensible code style.

You are someone who would easily be able to
Work closely with the US and India engineering teams to help build the Java/Scala based data pipelines.
Lead the India engineering team in technical excellence and ownership of critical modules; own the development of new modules and features.
Troubleshoot live production server issues.
Handle client coordination, work as part of a team, contribute independently, and drive the team to exceptional contributions with minimal supervision.
Follow Agile methodology and use JIRA for work planning and issue management/tracking.

Additional Project/Soft Skills:
Should be able to work independently with India- and US-based team members.
Strong verbal and written communication, with the ability to articulate problems and solutions over phone and email.
Strong sense of urgency, with a passion for accuracy and timeliness.
Ability to work calmly in high-pressure situations and manage multiple projects/tasks.
Ability to work independently and possess superior skills in issue resolution.
Should have the passion to learn and implement, analyse and troubleshoot issues.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

You Lead the Way. We've Got Your Back.
With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you, with benefits, programs, and flexibility that support you personally and professionally.
At American Express, you'll be recognized for your contributions, leadership, and impact; every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together.

Function Description:
SABE P&C supports 20+ diverse platforms and 25k+ users (Sales, Marketing, Analytical) across the Enterprise. It plays a pivotal role in driving critical aspects of the platform and product lifecycle, with a strong focus on enhancing customer experience and adoption. The SABE P&C vision is to enhance user experience and agility for Enterprise Platforms at optimized cost. The SABE P&C iPlus team is the Product Owner of the Enterprise Incentive Management product/tool, Varicent ICM. The team's vision is to automate controls across end-to-end incentive management and to continue enabling timely and accurate incentive payouts for Sales and Servicing colleagues across the Enterprise (GSG, TLS, GCS, ICS, GMNS), with a strong focus on strengthening controls, driving efficiencies, and improving user experience. This role is to drive the modernization of the incentive payout system.

Key Responsibilities:
Analyze key business requirements and identify KPIs, business drivers etc.
Coordinate effectively with architects, the technical team, and the business team to convert business requirements into technical stories, as well as deliver on iPlus features.
Develop end-to-end understanding of the business as well as the system architecture to better support business needs.
Design and implement Varicent workflows, rules, and calculations to meet business needs, ensuring high quality and adherence to timelines.
Drive complete User Acceptance Testing, from scenario identification to complete execution.
Identify and deliver on forward-looking features to drive better customer experience.
Your duties will involve troubleshooting data issues, creating reports, and working with the team on all aspects of the incentives process.
Ensure all payouts go out on time and error-free.

Skills/Capabilities

Functional:
3-5 years of experience in resolving and optimizing complex business problems using Varicent ICM.
Must have detailed business knowledge around the incentives process and business rules (ramifications of exceptions).
Understanding of the sales performance management domain.
Expertise in design/development/testing (SIT/UAT/QAT), i.e. all stages of the SDLC.
Knowledge of user experience principles.
Experience of end-to-end software product solution implementation.
Experience of user story drafting (Agile perspective).

Technical:
Varicent ICM platform (MUST HAVE)
HTML, CSS & JavaScript
RDBMS / Big Data / Hadoop
Rally & JIRA
SQL

Preferred:
Exposure to other incentive platforms like Callidus, Anaplan etc.
Experience in data analytics and automation.

Compliance Language
We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life. Benefits include:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. An offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

About the Role
You'll work directly with the founder, learning fast, owning small but meaningful pipeline tasks, and shipping production code exactly to spec.

What You'll Do
In this role you'll build and ship ETL/ELT pipelines in Python or Scala, crafting and tuning the necessary SQL transformations, while closely following my design documents and verbal briefs and iterating quickly on feedback until the output matches requirements. You'll keep the codebase healthy by working through Git feature branches and pull requests, adding unit tests, and adhering to our pre-commit hooks. Day-to-day work will involve operating across AWS services such as EMR/Spark as projects demand. Learning is continuous: we'll pair regularly for reviews and debugging, and you'll present your progress during short weekly catch-ups.

Must-Have Basics
Up to 6 months' practical experience (internship, project, or personal lab) in data engineering
Working knowledge of Python or Scala and solid SQL
Basic Git workflow familiarity
Conceptual understanding of big-data tooling (Spark/Hadoop)
Exposure to at least the core AWS storage/compute services
Strong willingness to take direction, ask questions, and iterate quickly
Reside in Ahmedabad and commit to full-time office work

Nice-to-Haves
Docker or Airflow familiarity
Data-modeling basics (star/snowflake, SCD)
Hackathon or open-source contributions

Compensation & Perks
₹15,000 - ₹30,000 / month (intern / junior band)
Direct 1-on-1 mentorship from a senior data engineer & founder
Dedicated learning budget after 90 days
Comfortable workspace, high-end dev laptop, free coffee/snacks

How to Apply
Apply with your résumé (PDF). In the note, share a link to code or briefly describe a data project you built. Shortlisted candidates will have an on-site interview (Python and SQL discussions).

Location: S.G. Highway, Ahmedabad
Timing: 8-9 hours (flexible)
Experience: 0 to 6 months

If you're hungry to learn, enjoy clear guidance, and want to grow into a full-stack data engineer, I'd love to hear from you.

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Overview
Job Title: Full Stack, AVP
Location: Bangalore, India

Role Description
Responsible for developing, enhancing, modifying and/or maintaining applications in the Enterprise Risk Technology environment. Software developers design, code, test, debug and document programs, as well as support activities for the corporate systems architecture. Employees work closely with business partners in defining requirements for system applications, typically have in-depth knowledge of development tools and languages, and are clearly recognized as content experts by peers. This is an individual contributor role, typically requiring 10-15 years of applicable experience.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.

You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best in class leave policy
Gender neutral parental leaves
100% reimbursement under childcare assistance benefit (gender neutral)
Flexible working arrangements
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complimentary health screening for those aged 35 years and above

Your Key Responsibilities
Develop software in Java, object-oriented databases and grid computing using Kubernetes and the OpenShift platform.
Build REST web services.
Design the interface between the UI and the REST service.
Build data-grid-centric UIs.
Participate fully in the development process through the entire software lifecycle.
Participate fully in the agile software development process.
Use BDD techniques, collaborating closely with users, analysts, developers, and other testers; make sure we are building the right thing.
Write code and write it well. Be proud to call yourself a programmer. Use test-driven development, write clean code, and refactor constantly; make sure we are building the thing right.
Be ready to work on a range of technologies and components, including user interfaces, services, and databases; act as a generalizing specialist.
Define and evolve the architecture of the components you are working on and contribute to architectural decisions at a department and bank-wide level.
Ensure that the software you build is reliable and easy to support in production. Be prepared to take your turn on call providing 3rd line support when it's needed.
Help your team to build, test and release software within short lead times and with a minimum of waste.
Work to develop and maintain a highly automated Continuous Delivery pipeline.
Help create a culture of learning and continuous improvement within your team and beyond.

Your Skills and Experience
Deep knowledge of at least one modern programming language, along with an understanding of both object-oriented and functional programming; ideally knowledge of Java and Scala.
Practical experience of test-driven development and constant refactoring in a continuous integration environment.
Practical experience of web technologies, frameworks and tools like HTML, CSS, JavaScript, React.
Experience of, or exposure to, Big Data Hadoop technologies / BI tools will be an added advantage.
Experience in Oracle PL/SQL programming is required.
Knowledge of SQL and relational databases.
Experience working in an agile team, practicing Scrum, Kanban or XP.
Experience of performing functional analysis is highly desirable.

The ideal candidate will also have:
Behaviour-driven development, particularly experience of how it can be used to define requirements in a collaborative manner to ensure the team builds the right thing and creates a system of living documentation.
Good to have: a range of technologies that store, transport, and manipulate data, for example NoSQL, document databases, graph databases, Hadoop/HDFS, streaming and messaging.
It will be an added advantage if the candidate has exposure to architecture and design approaches that support rapid, incremental, and iterative delivery, such as Domain-Driven Design, CQRS, Event Sourcing and microservices.

How We'll Support You
Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs

About Us and Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities

As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities

Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Preferred Education

Master's Degree

Required Technical And Professional Expertise

Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on cloud data platforms on Azure; experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server DB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical And Professional Experience

Certification in Azure, and Databricks or Cloudera Spark certified developer.
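
To make the streaming-pipeline requirement above concrete, here is a minimal PySpark Structured Streaming sketch that reads JSON events from a Kafka topic and lands them as Parquet files. The broker address, topic name, schema, and paths are hypothetical placeholders rather than details from the posting, and the job assumes the spark-sql-kafka package is available on the classpath.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = (SparkSession.builder
         .appName("kafka-to-parquet-sketch")
         .getOrCreate())

# Hypothetical event schema; a real pipeline would derive this from a data contract.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw Kafka stream; 'value' arrives as bytes and is parsed as JSON.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

# Write micro-batches as Parquet; the checkpoint makes the query restartable.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/curated/events")
         .option("checkpointLocation", "/data/checkpoints/events")
         .outputMode("append")
         .start())

query.awaitTermination()

The same skeleton extends to Hive, HBase or other sinks via foreachBatch, which is typically how the Azure or HDFS targets mentioned in the posting would be wired in.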

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Basavanagudi, Bengaluru, Karnataka

On-site


We are looking for an experienced Big Data Developer who can join immediately, with a strong background in PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 4 years of experience. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Key Responsibilities: Design, develop, and optimize large-scale data processing pipelines using PySpark. Work with various Apache tools and frameworks (like Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets. Ensure high performance and reliability of ETL jobs in production. Collaborate with Data Scientists, Analysts, and other stakeholders to understand data needs and deliver robust data solutions. Implement data quality checks and data lineage tracking for transparency and auditability. Work on data ingestion, transformation, and integration from multiple structured and unstructured sources. Leverage Apache NiFi for automated and repeatable data flow management (if applicable). Write clean, efficient, and maintainable code in Python and Java. Contribute to architectural decisions, performance tuning, and scalability planning.

Required Skills: 5–7 years of experience. Strong hands-on experience with PySpark for distributed data processing. Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.). Solid grasp of data warehousing, ETL principles, and data modeling. Experience working with large-scale datasets and performance optimization. Familiarity with SQL and NoSQL databases. Proficiency in Python and basic to intermediate knowledge of Java. Experience in using version control tools like Git and CI/CD pipelines.

Nice-to-Have Skills: Working experience with Apache NiFi for data flow orchestration. Experience in building real-time streaming data pipelines. Knowledge of cloud platforms like AWS, Azure, or GCP. Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.

Soft Skills: Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Self-driven with the ability to work independently and as part of a team.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Job Type: Full-time

Pay: ₹1,000,000.00 - ₹1,700,000.00 per year

Benefits: Health insurance

Schedule: Day shift

Supplemental Pay: Performance bonus; yearly bonus

Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Preferred)

Application Question(s): Are you ready to join within 15 days? What is your current CTC?

Experience: Python: 4 years (Preferred); PySpark: 4 years (Required); Data warehouse: 4 years (Required)

Work Location: In person

Application Deadline: 12/06/2025
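
As a rough illustration of the pipeline and data-quality responsibilities listed above, the sketch below shows a small PySpark batch job that ingests a CSV extract, applies a simple quality gate, and writes date-partitioned Parquet. The paths and column names (order_id, amount, order_ts) are invented for the example rather than taken from the posting.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("batch-etl-sketch").getOrCreate()

# Hypothetical raw input: daily order extracts as CSV.
orders = (spark.read
          .option("header", True)
          .option("inferSchema", True)
          .csv("/data/raw/orders/"))

# Basic quality gate: drop rows missing the key or with non-positive amounts.
clean = (orders
         .filter(col("order_id").isNotNull())
         .filter(col("amount") > 0)
         .withColumn("order_date", to_date(col("order_ts"))))

rejected = orders.count() - clean.count()
print(f"Rows rejected by quality checks: {rejected}")

# A date-partitioned layout lets downstream Hive/Spark queries prune by date.
(clean.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("/data/curated/orders/"))

In a production job the rejected rows would usually be written to a quarantine location and the counts pushed to a lineage or monitoring store rather than printed.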

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Infrastructure Lead/Architect
Job Type: Full-Time
Location: On-site (Hyderabad, Pune or New Delhi)

Job Summary

Join our customer's team as an Infrastructure Lead/Architect and play a pivotal role in architecting, designing, and implementing next-generation cloud infrastructure solutions. You will drive cloud and data platform initiatives, ensure system scalability and security, and act as a technical leader, shaping the backbone of our customers' mission-critical applications.

Key Responsibilities

Architect, design, and implement robust, scalable, and secure AWS cloud infrastructure utilizing services such as EC2, S3, Lambda, RDS, Redshift, and IAM. Lead the end-to-end design and deployment of high-performance, cost-efficient Databricks data pipelines, ensuring seamless integration with business objectives. Develop and manage data integration workflows using modern ETL tools in combination with Python and Java scripting. Collaborate with Data Engineering, DevOps, and Security teams to build resilient, highly available, and compliant systems aligned with operational standards. Act as a technical leader and mentor, guiding cross-functional teams through infrastructure design decisions and conducting in-depth code and architecture reviews. Oversee project planning, resource allocation, and deliverables, ensuring projects are executed on time and within budget. Proactively identify infrastructure bottlenecks, recommend process improvements, and drive automation initiatives. Maintain comprehensive documentation and uphold security and compliance standards across the infrastructure landscape.

Required Skills and Qualifications

8+ years of hands-on experience in IT infrastructure, cloud architecture, or related roles. Extensive expertise with AWS cloud services; AWS certifications are highly regarded. Deep experience with Databricks, including cluster deployment, Delta Lake, and machine learning integrations. Strong programming and scripting proficiency in Python and Java. Advanced knowledge of ETL/ELT processes and tools such as Apache NiFi, Talend, Airflow, or Informatica. Proven track record in project management, leading cross-functional teams; PMP or Agile/Scrum certifications are a plus. Familiarity with CI/CD workflows and Infrastructure as Code tools like Terraform and CloudFormation. Exceptional problem-solving, stakeholder management, and both written and verbal communication skills.

Preferred Qualifications

Experience with big data platforms such as Spark or Hadoop. Background in regulated environments (e.g., finance, healthcare). Knowledge of Kubernetes and AWS container orchestration (EKS).

Posted 1 week ago

Apply

0.0 - 9.0 years

0 Lacs

Delhi

On-site


Bangalore/ Delhi | Data / Full Time / Hybrid

What is Findem: Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics into one place. Only 3D data connects people and company data over time - making an individual's entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem's automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai

Experience: 5 - 9 years
Location: Delhi (hybrid, 3 days onsite)

We are looking for an experienced Big Data Engineer who will be responsible for building, deploying and managing various data pipelines, data lakes and big data processing solutions using big data and ETL technologies.

RESPONSIBILITIES

Build data pipelines, big data processing solutions and data lake infrastructure using various big data and ETL technologies. Assemble and process large, complex data sets that meet functional and non-functional business requirements. ETL from a wide variety of sources like MongoDB, S3, server-to-server, Kafka etc., and processing using SQL and big data technologies. Build analytical tools to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Build interactive and ad-hoc query self-serve tools for analytics use cases. Build data models and data schemas from a performance, scalability and functional requirement perspective. Build processes supporting data transformation, metadata, dependency and workflow management (see the Airflow sketch after this posting). Research, experiment with and prototype new tools and technologies and make them successful.

SKILL REQUIREMENTS

Must have: strong in Python/Scala. Must have experience in big data technologies like Spark, Hadoop, Athena / Presto, Redshift, Kafka etc. Experience with various file formats like Parquet, JSON, Avro, ORC etc. Experience with workflow management tools like Airflow. Experience with batch processing, streaming and message queues. Experience with any of the visualization tools like Redash, Tableau, Kibana etc. Experience in working with structured and unstructured data sets. Strong problem-solving skills.

Good to have: exposure to NoSQL stores like MongoDB. Exposure to cloud platforms like AWS, GCP, etc. Exposure to microservices architecture. Exposure to machine learning techniques.

The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area with our India headquarters in Bengaluru.

Equal Opportunity

As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status or any other legally-protected characteristic.
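
The Airflow sketch referenced above: a minimal daily DAG with an extract step followed by a transform step. It assumes a recent Airflow 2.x installation; the DAG id, task names, and callables are placeholders, not Findem internals.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull data from a source such as MongoDB, S3, or Kafka.
    print("extracting...")

def transform():
    # Placeholder: trigger a Spark job or SQL transformation on the extract.
    print("transforming...")

with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",      # older Airflow 2.x releases use schedule_interval instead
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task

Real pipelines would typically swap the PythonOperator tasks for Spark, EMR, or warehouse operators and add retries, SLAs, and alerting.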

Posted 1 week ago

Apply

0.0 - 18.0 years

0 Lacs

Indore, Madhya Pradesh

On-site


Indore, Madhya Pradesh, India

Qualification: BTech degree in computer science, engineering or a related field of study, or 12+ years of related work experience. 7+ years of design and implementation experience with large-scale, data-centric distributed applications. Professional experience architecting and operating cloud-based solutions with a good understanding of core disciplines like compute, networking, storage, security, databases etc. Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc. Good understanding of various architecture patterns like data lake, data lakehouse, data mesh etc. Good understanding of data warehousing concepts and hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc. Experience migrating or transforming legacy customer solutions to the cloud. Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, DataZone etc. Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase and related tools and technologies. Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc. in combination with SageMaker is good to have. Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more. Experience with a programming or scripting language (Python/Java/Scala). AWS Professional/Specialty certification or relevant cloud expertise.

Skills Required: AWS, Big Data, Spark, Technical Architecture

Role: Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. Capable of leading a technology team, inculcating an innovative mindset and enabling fast-paced deliveries. Able to adapt to new technologies, learn quickly, and manage high ambiguity. Ability to work with business stakeholders and attend/drive various architectural, design and status calls with multiple stakeholders. Exhibit good presentation skills with a high degree of comfort speaking with executives, IT management, and developers. Drive technology/software sales or pre-sales consulting discussions. Ensure end-to-end ownership of all tasks being aligned. Ensure high-quality software development with complete documentation and traceability. Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups). Conduct technical trainings/sessions and write whitepapers, case studies, blogs etc.

Experience: 10 to 18 years

Job Reference Number: 12895

Posted 1 week ago

Apply

0.0 - 6.0 years

0 Lacs

Indore, Madhya Pradesh

On-site


Indore, Madhya Pradesh, India; Bangalore, Karnataka, India; Noida, Uttar Pradesh, India

Qualification: Pre-Sales Solution Engineer - India. Experience areas or skills: pre-sales experience with software or analytics products. Excellent verbal and written communication skills. OLAP tools or Microsoft Analysis Services (MSAS). Data engineering, data warehouse or ETL. Hadoop ecosystem or AWS, Azure or GCP cluster and processing. Tableau, MicroStrategy or any BI tool. HiveQL, Spark SQL, PL/SQL or T-SQL. Writing and troubleshooting SQL programming or MDX queries. Working on Linux; programming in Python, Java or JavaScript would be a plus. Filling in RFPs or questionnaires from customers. NDA, success criteria, project closure and other documentation. Be willing to travel or relocate as per requirement.

Role: Acts as the main point of contact for customer contacts involved in the evaluation process. Product demonstrations to qualified leads. Product demonstrations in support of marketing activity such as events or webinars. Own RFP, NDA, PoC success criteria document, PoC closure and other documents. Secures alignment on process and documents with the customer/prospect. Owns the technical win phases of all active opportunities. Understand the customer domain and database schema. Provide OLAP and reporting solutions. Work closely with customers to understand and resolve environment, OLAP cube or reporting related issues. Coordinate with the solutioning team for execution of the PoC as per the success plan. Creates enhancement requests or identifies requests for new features on behalf of customers or hot prospects.

Experience: 3 to 6 years

Job Reference Number: 10771

Posted 1 week ago

Apply

0.0 - 20.0 years

0 Lacs

Indore, Madhya Pradesh

On-site


Indore, Madhya Pradesh, India; Bengaluru, Karnataka, India; Pune, Maharashtra, India; Hyderabad, Telangana, India; Noida, Uttar Pradesh, India

Qualification: 15+ years of experience in managing and implementing high-end software products. Expertise in Java/J2EE or EDW/SQL or Hadoop/Hive/Spark, preferably hands-on. Good knowledge of any of the clouds (AWS/Azure/GCP) - must have. Managed, delivered and implemented complex projects dealing with considerable data size (TB/PB) and high complexity. Experience in handling migration projects. Good to have: data ingestion, processing and orchestration knowledge.

Skills Required: Java Architecture, Big Data, Cloud Technologies

Role: Senior Technical Project Managers (STPMs) are in charge of handling all aspects of technical projects. This is a multi-dimensional and multi-functional role. You will need to be comfortable reporting program status to executives, as well as diving deep into technical discussions with internal engineering teams and external partners. You should collaborate with, and leverage, colleagues in business development, product management, analytics, marketing, engineering, and partner organizations. You will manage multiple projects and ensure all releases happen on time. You are responsible for managing and delivering the technical solution that supports the organization's vision and strategic direction. You should be capable of working with different types of customers and should possess good customer handling skills. Experience working in an ODC model and capable of presenting the technical design and architecture to senior technical stakeholders. Should have experience in defining the project and delivery plan for each assignment. Capable of doing resource allocation as per the requirements of each assignment. Should have experience driving RFPs. Should have experience of account management - revenue forecasting, invoicing, SOW creation etc.

Experience: 15 to 20 years

Job Reference Number: 13010

Posted 1 week ago

Apply

0.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site


Chennai, Tamil Nadu, India

Qualification: Skills: Big Data, PySpark, Python, Hadoop/HDFS, Spark. Good to have: GCP.

Roles/Responsibilities: Develops and maintains scalable data pipelines to support continuing increases in data volume and complexity. Collaborates with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization. Implements processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and the business processes that depend on it. Writes unit/integration tests, contributes to the engineering wiki, and documents work. Performs data analysis required to troubleshoot data-related issues and assists in their resolution. Works closely with a team of frontend and backend engineers, product managers, and analysts. Defines company data assets (data models) and the Spark, Spark SQL, and Hive SQL jobs that populate them. Designs data integrations and the data quality framework.

Basic Qualifications: BS or MS degree in Computer Science or a related technical field. 4+ years of SQL experience (NoSQL experience is a plus). 4+ years of experience with schema design and dimensional data modelling. 4+ years of experience with Big Data technologies like Spark and Hive. 2+ years of experience in data engineering on Google Cloud Platform services like BigQuery.

Skills Required: Big Data, PySpark, Python, Hadoop/HDFS, Spark

Experience: 4 to 7 years

Job Reference Number: 12907

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site


Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Hyderabad, Telangana, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India

Qualification: 5-7 years of good hands-on exposure to Big Data technologies - PySpark (DataFrame and Spark SQL), Hadoop, and Hive. Good hands-on experience with Python and Bash scripts. Good understanding of SQL and data warehouse concepts. Strong analytical, problem-solving, data analysis and research skills. Demonstrable ability to think outside of the box and not be dependent on readily available tools. Excellent communication, presentation and interpersonal skills are a must.

Good to have: hands-on experience with cloud-platform-provided Big Data technologies (e.g. IAM, Glue, EMR, Redshift, S3, Kinesis). Orchestration with Airflow, and any job scheduler experience. Experience in migrating workloads from on-premise to cloud and in cloud-to-cloud migrations.

Skills Required: Python, PySpark, AWS

Role: Develop efficient ETL pipelines as per business requirements, following development standards and best practices. Perform integration testing of the created pipelines in the AWS environment. Provide estimates for development, testing and deployments across different environments. Participate in code peer reviews to ensure our applications comply with best practices. Create cost-effective AWS pipelines with the required AWS services, e.g. S3, IAM, Glue, EMR, Redshift etc. (a minimal Glue job sketch follows this posting).

Experience: 8 to 10 years

Job Reference Number: 13025
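
The Glue job sketch referenced above: a minimal AWS Glue PySpark script that reads a table from the Glue Data Catalog, filters out obviously bad rows, and writes Parquet to S3. The database, table, and bucket names are placeholders, and the snippet only runs inside a Glue job environment where the awsglue libraries are available.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table (hypothetical database/table names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Simple quality filter on each record before writing the curated copy.
clean = source.filter(lambda row: row["order_id"] is not None)

glue_context.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()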

Posted 1 week ago

Apply

Exploring Hadoop Jobs in India

The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving IT industry and have a high demand for Hadoop professionals.

Average Salary Range

The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.

Career Path

In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.

Related Skills

In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.

Interview Questions

  • What is Hadoop and how does it work? (basic)
  • Explain the difference between HDFS and MapReduce. (medium)
  • How do you handle data skew in Hadoop? (medium)
  • What is YARN in Hadoop? (basic)
  • Describe the concept of NameNode and DataNode in HDFS. (medium)
  • What are the different types of join operations in Hive? (medium)
  • Explain the role of the ResourceManager in YARN. (medium)
  • What is the significance of the shuffle phase in MapReduce? (medium) - illustrated by the word-count sketch after this list
  • How does speculative execution work in Hadoop? (advanced)
  • What is the purpose of the Secondary NameNode in HDFS? (medium)
  • How do you optimize a MapReduce job in Hadoop? (medium)
  • Explain the concept of data locality in Hadoop. (basic)
  • What are the differences between Hadoop 1 and Hadoop 2? (medium)
  • How do you troubleshoot performance issues in a Hadoop cluster? (advanced)
  • Describe the advantages of using HBase over traditional RDBMS. (medium)
  • What is the role of the JobTracker in Hadoop? (medium)
  • How do you handle unstructured data in Hadoop? (medium)
  • Explain the concept of partitioning in Hive. (medium)
  • What is Apache ZooKeeper and how is it used in Hadoop? (advanced)
  • Describe the process of data serialization and deserialization in Hadoop. (medium)
  • How do you secure a Hadoop cluster? (advanced)
  • What is the CAP theorem and how does it relate to distributed systems like Hadoop? (advanced)
  • How do you monitor the health of a Hadoop cluster? (medium)
  • Explain the differences between Hadoop and traditional relational databases. (medium)
  • How do you handle data ingestion in Hadoop? (medium)
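
A few of the questions above (the MapReduce shuffle phase, data locality, job optimization) are easier to discuss with a concrete mental model. Below is a small, pure-Python simulation of the MapReduce word-count flow; it is an interview aid rather than Hadoop code, and the in-memory shuffle stands in for the distributed sort-and-group step Hadoop performs between mappers and reducers.

from collections import defaultdict

def mapper(line):
    # Map phase: emit (word, 1) for every word in an input split.
    for word in line.split():
        yield word.lower(), 1

def shuffle(mapped_pairs):
    # Shuffle/sort phase: group all values by key, as Hadoop does between
    # map and reduce (in a real job this is where network I/O and disk
    # spills usually dominate the runtime).
    grouped = defaultdict(list)
    for key, value in mapped_pairs:
        grouped[key].append(value)
    return grouped

def reducer(key, values):
    # Reduce phase: aggregate all values that share a key.
    return key, sum(values)

lines = ["Hadoop stores data in HDFS", "Spark and Hadoop process data"]
mapped = [pair for line in lines for pair in mapper(line)]
for word, counts in sorted(shuffle(mapped).items()):
    print(reducer(word, counts))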

Closing Remark

As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
