
552 HBase Jobs - Page 18

Set up a Job Alert
JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

1.0 - 3.0 years

2 - 5 Lacs

Chennai

Work from Office

Mandatory Skills: AWS, Python, SQL, Spark, Airflow, Snowflake
Responsibilities:
- Create and manage cloud resources in AWS.
- Ingest data from sources that expose it through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems.
- Implement data ingestion and processing using Big Data technologies.
- Process and transform data using technologies such as Spark and cloud services; understand the relevant business logic and implement it in the language supported by the underlying data platform.
- Develop automated data quality checks to ensure that correct data enters the platform and to verify the results of calculations.
- Develop infrastructure to collect, transform, combine, and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights, and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible.
- Identify and interpret trends and patterns in complex data sets.
- Build a framework using data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders.
- Participate in regular Scrum ceremonies with the agile teams.
- Develop queries, write reports, and present findings.
- Mentor junior team members and bring in industry best practices.
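As a rough illustration of the "automated data quality checks" responsibility above, here is a minimal PySpark sketch. The bucket paths, column names, and 1% threshold are invented for illustration and are not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-quality-check").getOrCreate()

# Hypothetical landing-zone path; in practice this would come from the
# ingestion job's configuration (S3 location, Snowflake stage, etc.).
df = spark.read.parquet("s3a://example-bucket/landing/transactions/")

total = df.count()
null_ids = df.filter(F.col("transaction_id").isNull()).count()
negative_amounts = df.filter(F.col("amount") < 0).count()

# Simple rule set: fail the batch if more than 1% of rows violate a rule.
failed = (null_ids / max(total, 1) > 0.01) or (negative_amounts / max(total, 1) > 0.01)

if failed:
    raise ValueError(
        f"Quality check failed: {null_ids} null IDs, "
        f"{negative_amounts} negative amounts out of {total} rows"
    )

# Only verified data is published to the curated zone.
df.write.mode("append").parquet("s3a://example-bucket/curated/transactions/")
```

In a real pipeline this check would typically run as one task in an Airflow DAG, so a failed batch blocks downstream loads into Snowflake.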

Posted 1 month ago

Apply

8.0 - 12.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Happiest Minds Technologies Pvt. Ltd. is looking for a Sr Data and ML Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Skills: Spark MLlib, Scala, Python, Databricks on AWS, Snowflake, GitLab, Jenkins, AWS DevOps CI/CD pipeline, Machine Learning, Airflow

Posted 1 month ago

Apply

6.0 - 11.0 years

18 - 25 Lacs

Hyderabad

Work from Office

SUMMARY
Data Modeling Professional
Location: Hyderabad/Pune
Experience: The ideal candidate should possess at least 6 years of relevant experience in data modeling, with proficiency in SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools), along with GCP.
Key Responsibilities:
- Develop and configure data pipelines across various platforms and technologies.
- Write complex SQL queries for data analysis on databases such as SQL Server, Oracle, and Hive.
- Create solutions to support AI/ML models and generative AI.
- Work independently on specialized assignments within project deliverables.
- Provide solutions and tools to enhance engineering efficiency.
- Design processes, systems, and operational models for end-to-end execution of data pipelines.
Preferred Skills: Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is advantageous.
Requirements:
- Minimum 6 years of experience in data modeling with SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools).
- Proficiency in writing complex SQL queries for data analysis.
- Strong problem-solving and analytical abilities.
- Excellent communication and presentation skills; the ability to communicate effectively at a global level is paramount.
- Ability to deliver high-quality materials against tight deadlines and to work effectively under pressure with rapidly changing priorities.
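To make the "complex SQL queries for data analysis" responsibility above concrete, here is a hedged sketch of an analytical query with window functions, run through PySpark's SQL interface. The orders table and its columns are invented for illustration only.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-analysis").getOrCreate()

# Assumes an 'orders' table is already registered in the metastore (e.g. Hive).
result = spark.sql("""
    SELECT customer_id,
           order_month,
           monthly_spend,
           SUM(monthly_spend) OVER (
               PARTITION BY customer_id ORDER BY order_month
           ) AS cumulative_spend,
           RANK() OVER (
               PARTITION BY order_month ORDER BY monthly_spend DESC
           ) AS spend_rank_in_month
    FROM (
        SELECT customer_id,
               date_trunc('MONTH', order_date) AS order_month,
               SUM(amount) AS monthly_spend
        FROM orders
        GROUP BY customer_id, date_trunc('MONTH', order_date)
    ) AS monthly
""")

result.show(20, truncate=False)
```

The same query shape (aggregate subquery plus running total and per-period rank) translates directly to SQL Server, Oracle, Hive, or BigQuery dialects with minor syntax changes.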

Posted 2 months ago

Apply

5.0 - 7.0 years

5 - 8 Lacs

Pune

Work from Office

Job Summary: Cummins is seeking a skilled Data Engineer to support the development, maintenance, and optimization of our enterprise data and analytics platform. This role involves hands-on experience in software development, ETL processes, and data warehousing, with strong exposure to tools like Snowflake, OBIEE, and Power BI. The engineer will collaborate with cross-functional teams, transforming data into actionable insights that enable business agility and scale. Please note: while the role is categorized as remote, it will follow a hybrid work model based out of our Pune office.
Key Responsibilities:
- Design, develop, and maintain ETL pipelines using Snowflake and related data transformation tools.
- Build and automate data integration workflows that extract, transform, and load data from various sources, including Oracle EBS and other enterprise systems.
- Analyze, monitor, and troubleshoot data quality and integrity issues using standardized tools and methods.
- Develop and maintain dashboards and reports using OBIEE, Power BI, and other visualization tools for business stakeholders.
- Work with IT and business teams to gather reporting requirements and translate them into scalable technical solutions.
- Participate in data modeling and storage architecture using star and snowflake schema designs.
- Contribute to the implementation of data governance, metadata management, and access control mechanisms.
- Maintain documentation for solutions and participate in testing and validation activities.
- Support migration and replication of data using tools such as Qlik Replicate and contribute to cloud-based data architecture.
- Apply agile and DevOps methodologies to continuously improve data delivery and quality assurance processes.
Why Join Cummins?
- Opportunity to work with a global leader in power solutions and digital transformation.
- Be part of a collaborative and inclusive team culture.
- Access to cutting-edge data platforms and tools.
- Exposure to enterprise-scale data challenges and finance domain expertise.
- Drive impact through data innovation and process improvement.
Competencies:
- Data Extraction & Transformation - Ability to perform ETL activities from varied sources with high data accuracy.
- Programming - Capable of writing and testing efficient code using industry standards and version control systems.
- Data Quality Management - Detect and correct data issues for better decision-making.
- Solution Documentation - Clearly document processes, models, and code for reuse and collaboration.
- Solution Validation - Test and validate changes or solutions based on customer requirements.
- Problem Solving - Address technical challenges systematically to ensure effective resolution and prevention.
- Customer Focus - Understand business requirements and deliver user-centric data solutions.
- Communication & Collaboration - Work effectively across teams to meet shared goals.
- Values Differences - Promote inclusion by valuing diverse perspectives and backgrounds.
Education, Licenses, Certifications: Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related technical discipline. Certifications in data engineering or relevant tools (Snowflake, Power BI, etc.) are a plus.
Experience (must-have skills):
- 5-7 years of experience in data engineering or software development, preferably within a finance or enterprise IT environment.
- Proficient in ETL tools, SQL, and data warehouse development.
- Proficient in Snowflake, Power BI, and OBIEE reporting platforms; must have worked on implementations using these tools and technologies.
- Strong understanding of data warehousing principles, including schema design (star/snowflake), ER modeling, and relational databases.
- Working knowledge of Oracle databases and Oracle EBS structures.
Preferred Skills:
- Experience with Qlik Replicate, data replication, or data migration tools.
- Familiarity with data governance, data quality frameworks, and metadata management.
- Exposure to cloud-based architectures, Big Data platforms (e.g., Spark, Hive, Kafka), and distributed storage systems (e.g., HBase, MongoDB).
- Understanding of agile methodologies (Scrum, Kanban) and DevOps practices for continuous delivery and improvement.
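The star/snowflake schema design mentioned above can be illustrated with a tiny self-contained example. The sketch uses SQLite purely so it runs anywhere (the role itself targets Snowflake/Oracle), and all table and column names are invented: a fact table is joined to dimension tables on surrogate keys and aggregated by dimension attributes, which is the typical BI query pattern behind OBIEE or Power BI reports.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Minimal star schema: one fact table surrounded by dimension tables.
cur.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, calendar_date TEXT, fiscal_period TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product_name TEXT, category TEXT);
CREATE TABLE fact_sales  (date_key INTEGER, product_key INTEGER, quantity INTEGER, revenue REAL);

INSERT INTO dim_date    VALUES (20240101, '2024-01-01', 'FY24-Q3');
INSERT INTO dim_product VALUES (1, 'Filter Kit', 'Parts');
INSERT INTO fact_sales  VALUES (20240101, 1, 10, 2500.0);
""")

# Typical reporting query: aggregate facts by dimension attributes.
cur.execute("""
    SELECT d.fiscal_period, p.category, SUM(f.revenue) AS total_revenue
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.fiscal_period, p.category
""")
print(cur.fetchall())
```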

Posted 2 months ago

Apply

8.0 - 13.0 years

9 - 13 Lacs

Gurugram

Work from Office

Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.
Your role:
- Predictive and prescriptive modelling using statistical and Machine Learning algorithms including, but not limited to, Time Series, Regression, Trees, Ensembles, and Neural Nets (deep and shallow: CNN, LSTM, Transformers, etc.).
- Experience with open-source OCR engines like Tesseract, speech recognition, Computer Vision, face recognition, emotion detection, etc. is a plus.
- Unsupervised learning: Market Basket Analysis, Collaborative Filtering, Dimensionality Reduction, and a good understanding of common matrix decomposition approaches like SVD.
- Various clustering approaches: hierarchical, centroid-based, density-based, distribution-based, and graph-based clustering like Spectral.
- NLP: Information Extraction, Similarity Matching, Sentiment Analysis, Text Clustering, Semantic Analysis, Document Summarization, Context Mapping/Understanding, Intent Classification, Word Embeddings, Vector Space Models; experience with libraries like NLTK, spaCy, and Stanford CoreNLP is a plus.
- Usage of Transformers for NLP, experience with LLMs (ChatGPT, Llama), usage of RAG with vector stores and frameworks like LangChain and LangGraph, and building agentic AI applications.
- Graph analytics: familiarity with graph algorithms (directed and undirected) such as traversal (BFS, DFS), cycle detection (Bellman-Ford, Floyd-Warshall), shortest path (Dijkstra, A*), etc. Building knowledge graphs from unstructured data and knowledge-graph optimizations like PageRank/TrustRank are expected.
- Mathematical optimization: familiarity with common optimization algorithms, both discrete (Linear, Mixed-Integer, Goal, Dynamic, etc.) and continuous (gradient descent and its variants, Newton's method, etc.) is expected. Experience with Simulated Annealing and exposure to ML-inspired evolutionary optimization algorithms like Genetic Algorithms and Genetic Programming is a plus.
- Simulations: Monte Carlo simulation, discrete-event simulation, agent-based simulation, hybrid simulation, system dynamics, and genetic-algorithm-based simulation.
- Model deployment: ML pipeline formation, data security and scrutiny checks, and MLOps for productionizing a built model on-premises and in the cloud.
Your Profile:
- Programming languages: Python (NumPy, SciPy, Pandas, Matplotlib, Seaborn)
- Databases: RDBMS (MySQL, Oracle, etc.), NoSQL stores (HBase, Cassandra, etc.)
- ML/DL frameworks: scikit-learn, TensorFlow (Keras), PyTorch; Big Data ML frameworks: Spark (Spark ML, GraphX), H2O
- Cloud: Azure/AWS/GCP
- Experienced in Agile ways of working; manage and track team effort through JIRA
- Experience in proposal, RFP, RFQ and pitch creation and delivery to large forums
- Experience in POC, MVP, PoV and asset creation with innovative use cases
- Experience working in a consulting environment is highly desirable
What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
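Since the role calls out shortest-path algorithms (Dijkstra, A*, Bellman-Ford, Floyd-Warshall), here is a compact, self-contained Dijkstra sketch in Python as a reference point; the graph data is made up for the example.

```python
import heapq


def dijkstra(graph, source):
    """Shortest distances from source in a weighted graph given as
    {node: [(neighbor, weight), ...]} with non-negative edge weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist


graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)], "D": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Bellman-Ford and Floyd-Warshall follow the same distance-relaxation idea but also handle negative edge weights (at higher cost), which is why they are usually listed alongside Dijkstra.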

Posted 2 months ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

About PhonePe Group: PhonePe is India's leading digital payments company with 50 crore (500 million) registered users and 3.7 crore (37 million) merchants, covering over 99% of the postal codes across India. On the back of its leadership in digital payments, PhonePe has expanded into financial services (Insurance, Mutual Funds, Stock Broking, and Lending) as well as adjacent tech-enabled businesses such as Pincode for hyperlocal shopping and Indus App Store, which is India's first localized app store. The PhonePe Group is a portfolio of businesses aligned with the company's vision to offer every Indian an equal opportunity to accelerate their progress by unlocking the flow of money and access to services.
Culture: At PhonePe, we take extra care to make sure you give your best at work, every day! And creating the right environment for you is just one of the things we do. We empower people and trust them to do the right thing. Here, you own your work from start to finish, right from day one. Being enthusiastic about tech is a big part of being at PhonePe. If you like building technology that impacts millions, ideating with some of the best minds in the country and executing on your dreams with purpose and speed, join us!
Challenges: Building for scale, rapid iterative development, and customer-centric product thinking at each step define every day for a developer at PhonePe. Though we engineer for a 50 million+ strong user base, we code with every individual user in mind. While we are quick to adopt the latest in engineering, we care utmost for security, stability, and automation. Apply if you want to experience the best combination of passionate application development and product-driven thinking.
As a Software Engineer, you will:
- Build robust and scalable web-based applications; think in terms of platforms and reuse.
- Build abstractions and contracts with separation of concerns for a larger scope.
- Drive problem-solving for high-level business and technical problems.
- Do high-level design with guidance: functional modeling and break-down of a module.
- Make incremental changes to architecture and perform impact analysis of the same.
- Do performance tuning and improvements in large-scale distributed systems.
- Mentor young minds and foster team spirit; break down execution into phases to bring predictability to overall execution.
- Work closely with Product Managers to derive the capability view from features/solutions, and lead execution of medium-sized projects.
- Work with broader stakeholders to track the impact of projects/features and proactively iterate to improve them.
As a Senior Software Engineer, you must have:
- Extensive and expert programming experience in at least one general programming language (e.g. Java, C, C++) and tech stack to write maintainable, scalable, unit-tested code.
- Experience with multi-threading and concurrency programming.
- Extensive experience in object-oriented design, knowledge of design patterns, and a huge passion for and ability to design intuitive module- and class-level interfaces.
- Excellent coding skills: able to convert a design into code fluently.
- Knowledge of Test Driven Development.
- Good understanding of databases (e.g. MySQL) and NoSQL stores (e.g. HBase, Elasticsearch, Aerospike, etc.).
- Strong desire to solve complex and interesting real-world problems.
- Experience with full life cycle development in any programming language on a Linux platform.
- A go-getter attitude that reflects in the energy and intent behind assigned tasks.
- Experience in a startup environment with high levels of ownership and commitment.
- BTech, MTech, or Ph.D. in Computer Science or a related technical discipline (or equivalent).
- Experience in building highly scalable business applications, which involve implementing large, complex business flows and dealing with a huge amount of data.
- 5+ years of experience in the art of writing code and solving problems at a large scale.
- An open communicator who shares thoughts and opinions frequently, listens intently, and takes constructive feedback.
As a Software Engineer, good to have:
- The ability to drive the design and architecture of multiple subsystems.
- Ability to break down larger/fuzzier problems into smaller ones within the scope of the product.
- Understanding of the industry's coding standards and an ability to create appropriate technical documentation.
PhonePe Full-Time Employee Benefits (not applicable for intern or contract roles):
- Insurance Benefits - Medical Insurance, Critical Illness Insurance, Accidental Insurance, Life Insurance
- Wellness Program - Employee Assistance Program, Onsite Medical Center, Emergency Support System
- Parental Support - Maternity Benefit, Paternity Benefit Program, Adoption Assistance Program, Day-care Support Program
- Mobility Benefits - Relocation benefits, Transfer Support Policy, Travel Policy
- Retirement Benefits - Employee PF Contribution, Flexible PF Contribution, Gratuity, NPS, Leave Encashment
- Other Benefits - Higher Education Assistance, Car Lease, Salary Advance Policy
Working at PhonePe is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us. Read more about PhonePe on our blog: Life at PhonePe | PhonePe in the news

Posted 2 months ago

Apply

4.0 - 9.0 years

9 - 13 Lacs

Bengaluru

Work from Office

About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about TII: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
Pyramid overview: A role with Target Data Science & Engineering means the chance to help develop and manage state-of-the-art predictive algorithms that use data at scale to automate and optimize decisions. Whether you join our Statistics, Optimization or Machine Learning teams, you'll be challenged to harness Target's impressive data breadth to build the algorithms that power solutions our partners in Marketing, Supply Chain Optimization, Network Security and Personalization rely on.
Position Overview: As a Senior Engineer on the Search Team, you serve as a specialist in the engineering team that supports the product. You help develop and gain insight into the application architecture. You can distill an abstract architecture into a concrete design and influence the implementation. You show expertise in applying the appropriate software engineering patterns to build robust and scalable systems. You are an expert in programming and apply your skills in developing the product. You have the skills to design and implement the architecture on your own, but choose to influence your fellow engineers by proposing software designs and providing feedback on software designs and/or implementations. You leverage data science in solving complex business problems. You make decisions based on data. You show good problem-solving skills and can help the team in triaging operational issues. You leverage your expertise in eliminating repeat occurrences.
About You:
- 4-year degree in a quantitative discipline (Science, Technology, Engineering, Mathematics) or equivalent experience.
- Experience with search engines like Solr and Elasticsearch.
- Strong hands-on programming skills in Java, Kotlin, Micronaut, and Python; experience with PySpark, SQL, and Hadoop/Hive is an added advantage.
- Experience with streaming systems like Kafka; experience with Kafka Streams is an added advantage.
- Experience in MLOps is an added advantage.
- Experience in Data Engineering is an added advantage.
- Strong analytical thinking skills with an ability to creatively solve business problems, innovating new approaches where required.
- Able to produce reasonable documents/narratives suggesting actionable insights.
- Self-driven and results-oriented.
- Strong team player with the ability to collaborate effectively across geographies/time zones.
Know More About Us here:
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
Culture - https://india.target.com/life-at-target/belonging
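For the search-engine side of this role, the snippet below is a hedged sketch of a basic relevance query sent to Elasticsearch's REST _search endpoint using only the requests library; the cluster URL, index, and field names are hypothetical.

```python
import requests

ES_URL = "http://localhost:9200"   # hypothetical cluster endpoint
INDEX = "products"                 # hypothetical index

# Boolean query: full-text relevance match plus an exact-term filter.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"title": "running shoes"}}],
            "filter": [{"term": {"in_stock": True}}],
        }
    },
    "size": 10,
}

resp = requests.post(f"{ES_URL}/{INDEX}/_search", json=query, timeout=10)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```

Solr exposes an equivalent HTTP query interface (the /select handler), so the same request-building pattern carries over.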

Posted 2 months ago

Apply

12.0 - 15.0 years

13 - 17 Lacs

Mumbai

Work from Office

Job Information:
- Job Opening ID: ZR_1688_JOB
- Date Opened: 24/12/2022
- Industry: Technology
- Job Type:
- Work Experience: 12-15 years
- Job Title: Big Data Architect
- City: Mumbai
- Province: Maharashtra
- Country: India
- Postal Code: 400008
- Number of Positions: 4
- Location: Mumbai, Pune, Chennai, Hyderabad, Coimbatore, Kolkata
12+ years of experience in the Big Data space across architecture, design, development, testing, and deployment, with a full understanding of the SDLC.
1. Experience with Hadoop and the related technology stack.
2. Experience with the Hadoop ecosystem (HDP + CDP) / Big Data (especially Hive); hands-on experience with programming languages such as Java/Scala/Python; hands-on experience/knowledge of Spark.
3. Responsible for, and focused on, the uptime and reliable running of all ingestion/ETL jobs.
4. Good SQL skills and experience working in a Unix/Linux environment is a must.
5. Create and maintain optimal data pipeline architecture.
6. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
7. Good to have: cloud experience.
8. Good to have: experience integrating Hadoop with data visualization tools like Power BI.

Posted 2 months ago

Apply

6.0 - 10.0 years

3 - 7 Lacs

Chennai

Work from Office

Job Information:
- Job Opening ID: ZR_2199_JOB
- Date Opened: 15/04/2024
- Industry: Technology
- Job Type:
- Work Experience: 6-10 years
- Job Title: Sr Data Engineer
- City: Chennai
- Province: Tamil Nadu
- Country: India
- Postal Code: 600004
- Number of Positions: 4
Requirements:
- Strong experience in Python.
- Good experience in Databricks.
- Experience working on the AWS/Azure cloud platforms.
- Experience working with REST APIs and services, messaging, and event technologies.
- Experience with ETL or building data pipeline tools.
- Experience with streaming platforms such as Kafka.
- Demonstrated experience working with large and complex data sets.
- Ability to document data pipeline architecture and design.
- Experience in Airflow is nice to have.
- To build complex Delta Lake
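To make the Databricks/Delta Lake part of this listing concrete, here is a minimal, hedged sketch: pull records from a REST API and append them to a Delta table with PySpark. The endpoint, record shape, and storage path are invented, and the write assumes the Delta Lake libraries are available (as they are on Databricks).

```python
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rest-to-delta").getOrCreate()

# Hypothetical REST endpoint returning a JSON array of records.
records = requests.get("https://api.example.com/v1/events", timeout=30).json()

# Build a DataFrame from the records; a real pipeline would pin an explicit schema.
df = spark.createDataFrame(records)

# Append into a Delta table (requires Delta Lake, e.g. on Databricks).
(df.write
   .format("delta")
   .mode("append")
   .save("/mnt/datalake/bronze/events"))
```

In an orchestrated setup, a job like this would typically be one Airflow task, with downstream tasks reading the Delta table for transformation and publishing.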

Posted 2 months ago

Apply

5.0 - 8.0 years

2 - 5 Lacs

Chennai

Work from Office

Job Information:
- Job Opening ID: ZR_2168_JOB
- Date Opened: 10/04/2024
- Industry: Technology
- Job Type:
- Work Experience: 5-8 years
- Job Title: AWS Data Engineer
- City: Chennai
- Province: Tamil Nadu
- Country: India
- Postal Code: 600002
- Number of Positions: 4
Mandatory Skills: AWS, Python, SQL, Spark, Airflow, Snowflake
Responsibilities:
- Create and manage cloud resources in AWS.
- Ingest data from sources that expose it through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems.
- Implement data ingestion and processing using Big Data technologies.
- Process and transform data using technologies such as Spark and cloud services; understand the relevant business logic and implement it in the language supported by the underlying data platform.
- Develop automated data quality checks to ensure that correct data enters the platform and to verify the results of calculations.
- Develop infrastructure to collect, transform, combine, and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights, and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible.
- Identify and interpret trends and patterns in complex data sets.
- Build a framework using data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders.
- Participate in regular Scrum ceremonies with the agile teams.
- Develop queries, write reports, and present findings.
- Mentor junior team members and bring in industry best practices.

Posted 2 months ago

Apply

5.0 - 8.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Job Information:
- Job Opening ID: ZR_1628_JOB
- Date Opened: 09/12/2022
- Industry: Technology
- Job Type:
- Work Experience: 5-8 years
- Job Title: Data Engineer
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560001
- Number of Positions: 4
Roles and Responsibilities:
- 4+ years of experience as a data developer using Python.
- Knowledge of Spark/PySpark is preferable but not mandatory.
- Azure cloud experience preferred (alternate cloud experience is fine); preferred experience with the Azure platform, including Azure Data Lake, Databricks, and Data Factory.
- Working knowledge of different file formats such as JSON, Parquet, CSV, etc.
- Familiarity with data encryption and data masking.
- Database experience in SQL Server is preferable; experience with NoSQL databases like MongoDB is preferred.
- Team player; reliable, self-motivated, and self-disciplined.
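Given the emphasis on file formats (JSON, Parquet, CSV), here is a small hedged example of converting between them with pandas; the paths are placeholders, and Parquet output assumes pyarrow or fastparquet is installed.

```python
import pandas as pd

# Read a CSV extract and a JSON-lines feed (placeholder paths).
csv_df = pd.read_csv("input/customers.csv")
json_df = pd.read_json("input/events.json", lines=True)

# Basic cleanup before persisting.
csv_df = csv_df.dropna(subset=["customer_id"]).drop_duplicates("customer_id")

# Write columnar Parquet output (requires pyarrow or fastparquet).
csv_df.to_parquet("output/customers.parquet", index=False)
json_df.to_parquet("output/events.parquet", index=False)
```

Parquet's columnar layout is what makes it the usual landing format in a data lake: downstream engines such as Spark or Databricks read only the columns a query needs.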

Posted 2 months ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Hyderabad

Work from Office

- 10+ years of software development experience building large-scale distributed data processing systems/applications, data engineering, or large-scale internet systems.
- At least 4 years of experience developing/leading Big Data solutions at enterprise scale, with at least one end-to-end implementation.
- Strong experience in programming languages: Java/J2EE/Scala.
- Good experience in Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala, and NoSQL databases.
- Experience with batch processing and AutoSys job scheduling and monitoring.
- Performance analysis, troubleshooting, and resolution (this includes familiarity with, and investigation of, Cloudera/Hadoop logs).
- Work with Cloudera on open issues that would result in cluster configuration changes, and implement them as needed.
- Strong experience with databases such as SQL, Hive, Elasticsearch, HBase, etc.
- Knowledge of Hadoop security, data management, and governance.
Primary Skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD

Posted 2 months ago

Apply

5.0 - 10.0 years

4 - 7 Lacs

Bengaluru

Work from Office

We're seeking a Senior Software Engineer or a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include: developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, location, and transactional signals to power people-based marketing; ingesting vast amounts of identity and event data from our customers and partners; facilitating data transfers across systems; ensuring the integrity and health of our datasets; and much more. As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.
Essential Responsibilities: As a Senior Software Engineer or a Lead Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Using technologies daily such as Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in an on-call rotation in your respective time zone (be available by phone or email in case something goes wrong)
Desired Characteristics:
- Minimum 5-10 years of software engineering experience.
- Proven long-term experience and enthusiasm for distributed data processing at scale, and eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle, from inception to production and monitoring.
- Fluency in Python, or solid experience in Scala or Java.
- Proficient with relational databases and advanced SQL.
- Expert in the usage of services like Spark and Hive.
- Experience with web frameworks such as Flask and Django.
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience with Kafka or other stream message processing solutions.
- Experience using cloud services (AWS) at scale.
- Experience in agile software development processes.
- Excellent interpersonal and communication skills.
Nice to have:
- Experience with large-scale / multi-tenant distributed systems.
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase.
- Experience with real-time streaming frameworks: Flink, Storm.
- Experience with open table formats such as Iceberg, Hudi, or Delta Lake.
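Since the stack explicitly includes Apache Airflow for batch scheduling, here is a minimal, hedged DAG sketch in the Airflow 2.x style; the DAG ID, task contents, and schedule are invented for illustration.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull identity/event data from source systems")


def transform():
    print("clean and join signals with Spark or SQL")


def load():
    print("publish curated tables to Snowflake / Hive")


with DAG(
    dag_id="identity_graph_batch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Task dependencies: extract -> transform -> load.
    t_extract >> t_transform >> t_load
```

In practice each PythonOperator would be replaced by an operator that submits a Spark job or runs a Snowflake statement, but the dependency graph and daily schedule look the same.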

Posted 2 months ago

Apply

7.0 - 12.0 years

9 - 15 Lacs

Bengaluru

Work from Office

We are looking for Lead or Principal Software Engineers to join our Data Cloud team. Our Data Cloud team is responsible for the Zeta Identity Graph platform, which captures billions of behavioural, demographic, environmental, and transactional signals for people-based marketing. As part of this team, the data engineer will be designing and growing our existing data infrastructure to democratize data access, enable complex data analyses, and automate optimization workflows for business and marketing operations.
Job Description:
Essential Responsibilities: As a Lead or Principal Data Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Using technologies daily such as HDFS, Spark, Snowflake, Hive, HBase, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in a 24/7 on-call rotation (be available by phone or email in case something goes wrong)
Desired Characteristics:
- Minimum 7 years of software engineering experience.
- Proven long-term experience and enthusiasm for distributed data processing at scale, and eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle, from inception to production and monitoring.
- Fluency in Python, or solid experience in Scala or Java.
- Proficient with relational databases and advanced SQL.
- Expert in the usage of services like Spark, HDFS, Hive, and HBase.
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale.
- Experience in agile software development processes.
- Excellent interpersonal and communication skills.
Nice to have:
- Experience with large-scale / multi-tenant distributed systems.
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase.
- Experience with real-time streaming frameworks: Flink, Storm.
- Experience with web frameworks such as Flask and Django.

Posted 2 months ago

Apply

2.0 - 4.0 years

2 - 6 Lacs

Hyderabad

Work from Office

Fusion Plus Solutions Inc is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 months ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Tech Stalwart Solution Private Limited is looking for a Sr. Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 months ago

Apply

10.0 - 15.0 years

16 - 20 Lacs

Pune

Work from Office

Arista Networks is seeking high-caliber, self-motivated test engineers for its enterprise and cloud-managed WiFi products. This role will involve testing and validating the complex cloud infrastructure and services that power WiFi networks, ensuring they perform at scale and meet the highest standards of reliability, security, and efficiency. You will play a key role in the design, development, and execution of comprehensive test plans for cloud-based Wi-Fi solutions, working closely with cross-functional teams to deliver industry-leading products.
Responsibilities:
- Lead the testing efforts on the cloud infrastructure that supports Wi-Fi networks, ensuring scalability, performance, and resilience in different environments (cloud and on-premise deployments).
- Design and execute performance, scalability, and load tests to ensure the cloud services can handle heavy traffic and provide reliable WiFi services under various conditions.
- Collaborate with engineering teams to conduct integration tests for cloud-based services that interface with hardware; ensure seamless interactions between cloud APIs and the Access Points.
- Create solution-based, detailed test plans, test cases, and test reports for all stages of the development lifecycle; ensure traceability between requirements, test cases, and defects.
- Own the feature and core area of the product, and be the single point of contact and testing expert for your deliverables.
- Track defects and issues using defect management systems, ensuring they are escalated appropriately and resolved in a timely manner.
- Work closely with product managers, developers, and other stakeholders to understand requirements, provide feedback, and ensure high-quality product delivery.
- Participate in escalation debugging and reproduce complex issues reported by customers.
- Mentor junior team members and share best practices in testing and cloud-based WiFi solution validation.
Requirements:
- Bachelor's degree in E&TC/EE/CE/CS/IT or a related field; a Master's degree is a plus.
- 10-15 years of experience in testing and troubleshooting cloud technologies, virtualization, microservices, and debugging complex software.
- Good experience working with Hadoop and Big Data technologies like HDFS, HBase, Spark, Kafka streaming, and Elasticsearch.
- Good working knowledge of databases (SQL) and server-side validations.
- Experience with various performance testing methods (load, stress, and capacity tests) will be a plus.
- Strong understanding of distributed systems fundamentals around scalability, elasticity, availability, and fault tolerance.
- Experience with networking tools and networking fundamentals across the OSI stack is a plus.
- Strong analytical and debugging skills and proven expertise in creating test methodologies and writing test plans.
- Strong knowledge of Linux systems.
- Familiarity with scripting languages such as Python will be a plus.
- Excellent written and oral English communication skills.
- Adaptive to learning new things and technologies.
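The load-testing responsibility above can be sketched very simply in Python: the toy harness below fires concurrent HTTP requests at a hypothetical cloud API endpoint and reports rough latency percentiles. It is only an illustration of the idea, not a substitute for dedicated load-testing tools, and the URL and request counts are made up.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://wifi-cloud.example.com/api/v1/health"  # hypothetical endpoint
REQUESTS = 200
CONCURRENCY = 20


def timed_call(_):
    """Issue one request and return (latency_seconds, success_flag)."""
    start = time.perf_counter()
    ok = False
    try:
        ok = requests.get(URL, timeout=5).status_code == 200
    except requests.RequestException:
        pass
    return time.perf_counter() - start, ok


with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_call, range(REQUESTS)))

latencies = [latency for latency, _ in results]
errors = sum(1 for _, ok in results if not ok)
print(f"p50={statistics.median(latencies):.3f}s "
      f"p95={statistics.quantiles(latencies, n=20)[18]:.3f}s "
      f"errors={errors}/{REQUESTS}")
```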

Posted 2 months ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Work from Office

The role requires strategic thinking and planning, and providing expertise throughout the entire product development life cycle, with strong knowledge of SAS Viya programming, API architecture, Kubernetes, and the Risk and Finance domain. The role also requires ownership, making sure that quality is baked in from the start.
Key Responsibilities: N/A
Skills and Experience:
- Hands-on in Python (data manipulation with Pandas and NumPy) and the SAS developer framework; expert in SAS development work.
- Desirable skills in the SAS admin framework.
- Desirable skills in the Hadoop development framework.
- Sound statistical knowledge, analytical and problem-solving skills are desirable.
- Good to have: knowledge of Big Data technologies (Hortonworks HDP, Apache Hadoop, HDFS, Hive, Sqoop, Flume, ZooKeeper, HBase, Oozie, Spark, NiFi, Kafka, SnapLogic, AWS, Redshift).
- Experience with monitoring tools.
- Development capabilities using Python, Spark, SAS, and R.
- Good management and analytical skills.
- Good writing and oral communication skills.
- Good understanding of and experience in projects (e.g. SDLC, Agile methodology).
- Desirable skills in the Big Data space (Hadoop stack: HDFS, Pig, Hive, HBase, Sqoop, etc.).
- Ability to debug and write/modify shell scripts and Python.
- Willing to work on-call support over weekends.
- Liaise with multiple application teams and coordinate issue resolution.
- Good analytical and interaction skills.
Qualifications: N/A
Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do.
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well.
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term.
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing:
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
- Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, combined to a minimum of 30 days.
- Flexible working options based around home and office locations, with flexible working patterns.
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits.
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
26272

Posted 2 months ago

Apply

10.0 - 15.0 years

30 - 35 Lacs

Pune

Work from Office

Who You'll Work With: Arista Networks is seeking high-caliber, self-motivated test engineers for its enterprise and cloud-managed WiFi products. This role will involve testing and validating the complex cloud infrastructure and services that power WiFi networks, ensuring they perform at scale and meet the highest standards of reliability, security, and efficiency. You will play a key role in the design, development, and execution of comprehensive test plans for cloud-based Wi-Fi solutions, working closely with cross-functional teams to deliver industry-leading products.
What You'll Do:
- Lead the testing efforts on the cloud infrastructure that supports Wi-Fi networks, ensuring scalability, performance, and resilience in different environments (cloud and on-premise deployments).
- Design and execute performance, scalability, and load tests to ensure the cloud services can handle heavy traffic and provide reliable WiFi services under various conditions.
- Collaborate with engineering teams to conduct integration tests for cloud-based services that interface with hardware; ensure seamless interactions between cloud APIs and the Access Points.
- Create solution-based, detailed test plans, test cases, and test reports for all stages of the development lifecycle; ensure traceability between requirements, test cases, and defects.
- Own the feature and core area of the product, and be the single point of contact and testing expert for your deliverables.
- Track defects and issues using defect management systems, ensuring they are escalated appropriately and resolved in a timely manner.
- Work closely with product managers, developers, and other stakeholders to understand requirements, provide feedback, and ensure high-quality product delivery.
- Participate in escalation debugging and reproduce complex issues reported by customers.
- Mentor junior team members and share best practices in testing and cloud-based WiFi solution validation.
- Bachelor's degree in E&TC/EE/CE/CS/IT or a related field; a Master's degree is a plus.
- 10-15 years of experience in testing and troubleshooting cloud technologies, virtualization, microservices, and debugging complex software.

Posted 2 months ago

Apply

3.0 - 6.0 years

13 - 18 Lacs

Pune

Work from Office

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.
Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems, the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.
ZS's Platform Development team designs, implements, tests and supports ZS's ZAIDYN Platform, which helps drive superior customer experiences and revenue outcomes through integrated products and analytics. Whether writing distributed optimization algorithms or advanced mapping and visualization interfaces, you will have an opportunity to solve challenging problems, make an immediate impact and contribute to bringing better health outcomes.
What you'll do:
- Pair program, write unit tests, lead code reviews, and collaborate with QA analysts to ensure you develop the highest quality multi-tenant software that can be productized.
- As part of our full-stack product engineering team, build multi-tenant, cloud-based software products/platforms and internal assets that leverage cutting-edge technology based on the Amazon AWS cloud platform.
- Work with junior developers to implement large features that are on the cutting edge of Big Data.
- Be a technical leader to your team, and help them improve their technical skills.
- Stand up for engineering practices that ensure quality products: automated testing, unit testing, agile development, continuous integration, code reviews, and technical design.
- Work with product managers and architects to design product architecture and to work on POCs.
- Take immediate responsibility for project deliverables.
- Understand client business issues and design features that meet client needs.
- Undergo on-the-job and formal trainings and certifications, and constantly advance your knowledge and problem-solving skills.
What you'll bring:
- Bachelor's Degree in CS, IT, or a related discipline.
- Strong analytic, problem-solving, and programming ability.
- Experience coding in an object-oriented language such as Python, Java, C#, etc.
- Hands-on experience with Apache Spark, EMR, Hadoop, HDFS, or other big data technologies.
- Experience with development on the AWS (Amazon Web Services) platform is preferable.
- Experience in Linux shell or PowerShell scripting is preferable.
- Experience in HTML5, JavaScript, and JavaScript libraries is preferable.
- Understanding of Data Science algorithms.
- Good to have: Pharma domain understanding.
- Initiative and drive to contribute.
- Excellent organizational and task management skills.
- Strong communication skills.
- Ability to work in global cross-office teams; ZS is a global firm, and fluency in English is required.
Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.
Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.
Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.
To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find out more at www.zs.com

Posted 2 months ago

Apply

5.0 - 7.0 years

14 - 18 Lacs

Bengaluru

Work from Office

As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Experienced in developing efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills.
- Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server DB.
- Exposure to streaming solutions and message brokers like Kafka.
- Experience with Unix/Linux commands and basic work experience in shell scripting.
Preferred technical and professional experience: Certification in Azure, and Databricks or Cloudera Spark certified developers.
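For the streaming-pipeline requirement, here is a hedged Spark Structured Streaming sketch that reads from Kafka and lands Parquet files; the broker, topic, and storage paths are placeholders, and running it requires the spark-sql-kafka connector package on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-ingest").getOrCreate()

# Placeholder broker and topic names.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "events")
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
events = raw.select(F.col("value").cast("string").alias("payload"),
                    F.col("timestamp"))

# Continuously append micro-batches to a data-lake path (placeholder ADLS URI).
query = (events.writeStream
         .format("parquet")
         .option("path", "abfss://datalake@account.dfs.core.windows.net/bronze/events")
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .outputMode("append")
         .start())

query.awaitTermination()
```

The checkpoint location is what gives the pipeline exactly-once file output across restarts, which is the usual reason Structured Streaming is preferred over hand-rolled Kafka consumers for this kind of ingestion.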

Posted 2 months ago

Apply

6.0 - 11.0 years

18 - 25 Lacs

Hyderabad

Work from Office

SUMMARY
Data Modeling Professional
Location: Hyderabad/Pune
Experience: The ideal candidate should possess at least 6 years of relevant experience in data modeling, with proficiency in SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools).
Key Responsibilities:
- Develop and configure data pipelines across various platforms and technologies.
- Write complex SQL queries for data analysis on databases such as SQL Server, Oracle, and Hive.
- Create solutions to support AI/ML models and generative AI.
- Work independently on specialized assignments within project deliverables.
- Provide solutions and tools to enhance engineering efficiency.
- Design processes, systems, and operational models for end-to-end execution of data pipelines.
Preferred Skills: Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is advantageous.
Requirements:
- Minimum 6 years of experience in data modeling with SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools).
- Proficiency in writing complex SQL queries for data analysis.
- Strong problem-solving and analytical abilities.
- Excellent communication and presentation skills; the ability to communicate effectively at a global level is paramount.
- Ability to deliver high-quality materials against tight deadlines and to work effectively under pressure with rapidly changing priorities.

Posted 2 months ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Pune

Work from Office

- Design, develop, and maintain scalable data pipelines and systems using PySpark and other big data tools.
- Monitor, troubleshoot, and resolve issues in data workflows and pipelines.
- Implement best practices for data processing, security, and storage.
Required Candidate Profile:
- Strong programming skills in PySpark and Python.
- Experience with big data frameworks like Hadoop, Spark, or Kafka.
- Proficiency in working with cloud platforms.
- Experience with data modeling and working with databases.

Posted 2 months ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office

Our Purpose
Title and Summary: Lead Software Engineer - Java Developer, Cloud
Job Description Summary
Overview: The Offers team in the Loyalty group at Mastercard is at the heart of providing end consumers with highly personalized, card-linked offers by marrying offers from our advertising network with anonymized transaction data. We are looking for strong, innovative engineers who can bring unique perspectives and innovative ideas to all areas of development and are interested in continuing to improve our platform through the ever-changing technology landscape. In this role, you will work on multiple projects which are critical to Mastercard's role. An ideal candidate is someone who works well with a cohesive, team-oriented group of developers, who has a passion for developing great software, who enjoys solving problems in a challenging environment, and who has the desire to take their career to the next level. If any of these opportunities excite you, we would love to talk.
Responsibilities:
- Design, develop, test, deploy, maintain, and continuously improve software products
- Manage individual project priorities, deadlines, and deliverables
- Ensure the final product is highly performant, responsive, and aligned with business goals
- Actively participate in agile ceremonies including daily stand-ups, story pointing, backlog refinement, and retrospectives
- Collaborate with cross-functional teams including Product, Business, and Engineering
- Coach and mentor junior and new team members to foster professional growth and team alignment
All About You / Experience:
- Proven experience as a senior or lead software engineer with a strong technical foundation
- Proficient in at least one modern programming language, preferably Java
- Strong knowledge of object-oriented design, data structures, algorithms, and complexity analysis
- Extensive experience building RESTful APIs and designing microservices-based architectures
- Hands-on expertise with React, Spring Boot, PostgreSQL, and Elasticsearch
- Demonstrated full-stack development experience, including frontend and backend integration
- Familiarity with cloud platforms, along with distributed systems
- Exposure to Big Data technologies like Apache Spark, Python, and HBase is a plus
- Strong communication and leadership skills with a proactive, collaborative mindset

Posted 2 months ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Pune

Work from Office

Lead Software Engineer - Java Developer, Cloud
Job Description Summary
Overview: The Offers team in the Loyalty group at Mastercard is at the heart of providing end consumers with highly personalized, card-linked offers by marrying offers from our advertising network with anonymized transaction data. We are looking for strong, innovative engineers who can bring unique perspectives and innovative ideas to all areas of development and are interested in continuing to improve our platform through the ever-changing technology landscape. In this role, you will work on multiple projects which are critical to Mastercard's role. An ideal candidate is someone who works well with a cohesive, team-oriented group of developers, who has a passion for developing great software, who enjoys solving problems in a challenging environment, and who has the desire to take their career to the next level. If any of these opportunities excite you, we would love to talk.
Responsibilities:
- Design, develop, test, deploy, maintain, and continuously improve software products
- Manage individual project priorities, deadlines, and deliverables
- Ensure the final product is highly performant, responsive, and aligned with business goals
- Actively participate in agile ceremonies including daily stand-ups, story pointing, backlog refinement, and retrospectives
- Collaborate with cross-functional teams including Product, Business, and Engineering
- Coach and mentor junior and new team members to foster professional growth and team alignment
All About You / Experience:
- Proven experience as a senior or lead software engineer with a strong technical foundation
- Proficient in at least one modern programming language, preferably Java
- Strong knowledge of object-oriented design, data structures, algorithms, and complexity analysis
- Extensive experience building RESTful APIs and designing microservices-based architectures
- Hands-on expertise with React, Spring Boot, PostgreSQL, and Elasticsearch
- Demonstrated full-stack development experience, including frontend and backend integration
- Familiarity with cloud platforms, along with distributed systems
- Exposure to Big Data technologies like Apache Spark, Python, and HBase is a plus
- Strong communication and leadership skills with a proactive, collaborative mindset

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies