5.0 - 8.0 years
10 - 14 Lacs
Gurugram
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Apache Spark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive...
Posted 1 week ago
6.0 - 11.0 years
6 - 10 Lacs
Mumbai, Pune, Bengaluru
Work from Office
Notice Period: Immediate to 15 Days.
Job Description:
- 6+ years of overall Data Analytics and BI experience
- Experience in Spark, Hive, Scala
- Build data pipelines for ETL that fetch data from a variety of sources such as flat files, relational databases, and APIs
- Python scripting with a focus on data transformation and manipulation libraries such as Pandas and NumPy (see the illustrative sketch below)
- Strong knowledge and hands-on experience with SQL (should be able to write advanced SQL queries)
- Good hands-on experience with data visualization tools such as Power BI, Tableau, Looker
- Good understanding and hands-on experience with data engineering pipeline management tools such as Airflow
- Good communication skills
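The Pandas/NumPy transformation work listed above is the kind of thing a short, self-contained sketch can illustrate. This is a hypothetical example only; the file name, columns, and threshold are invented for illustration and are not part of this role:

```python
import numpy as np
import pandas as pd

# Hypothetical flat-file input; file name and columns are illustrative only.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Typical transformation/manipulation steps in an ETL job
orders["revenue"] = orders["quantity"] * orders["unit_price"]
orders["order_month"] = orders["order_date"].dt.to_period("M").astype(str)

# Aggregate to a shape a BI tool (Power BI / Tableau / Looker) can consume
monthly = (
    orders.groupby("order_month", as_index=False)
          .agg(total_revenue=("revenue", "sum"),
               order_count=("order_id", "count"))
)

# NumPy for simple vectorised logic, e.g. flagging unusually strong months
monthly["high_revenue"] = np.where(monthly["total_revenue"] > 1_000_000, 1, 0)

monthly.to_csv("monthly_revenue.csv", index=False)
```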
Posted 1 week ago
6.0 - 9.0 years
27 - 42 Lacs
Pune
Work from Office
Skills: PySpark
Experience: 6 to 9 years
Location: AIA-Pune
Responsibilities:
- Develop and maintain scalable data pipelines using Python and PySpark.
- Collaborate with data engineers and data scientists to understand and fulfil data processing needs.
- Optimize and troubleshoot existing PySpark applications for performance improvements.
- Write clean, efficient, and well-documented code following best practices.
- Participate in design and code reviews.
- Develop and implement ETL processes to extract, transform, and load data.
- Ensure data integrity and quality throughout the data lifecycle.
- Stay current with the latest industry trends and technologies in big data and cloud computing.
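As a rough illustration of the kind of PySpark pipeline these responsibilities describe, a minimal sketch might look like the following; the application name, paths, and columns are hypothetical, not taken from the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_pipeline").getOrCreate()

# Hypothetical input location and schema, for illustration only
events = spark.read.parquet("s3://example-bucket/raw/events/")

cleaned = (
    events
    .dropDuplicates(["event_id"])                # basic data-quality step
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

daily_counts = (
    cleaned.groupBy("event_date", "event_type")
           .agg(F.count("*").alias("event_count"))
)

# Partitioning the output by date keeps downstream reads selective,
# one of the simpler performance levers for jobs like this.
(daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/daily_event_counts/"))
```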
Posted 1 week ago
2.0 - 5.0 years
5 - 9 Lacs
Hyderabad
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving...
Posted 1 week ago
2.0 - 5.0 years
5 - 9 Lacs
Chennai
Work from Office
About The Role
Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must have skills: Alteryx
Good to have skills: Hadoop Administration
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Designer, you will assist in defining requirements and designing applications to meet business process and application requirements. Your typical day will involve collaborating with various stakeholders to gather insights, analyzing user needs, and translating them into functional specifications. You will en...
Posted 1 week ago
4.0 - 7.0 years
7 - 11 Lacs
Noida
Work from Office
Key Responsibilities:
- Architect and Develop Data Pipelines: Design and implement data pipelines that can handle the 4Vs of data (Volume, Variety, Velocity, Veracity) from diverse sources.
- Implement CDC: Utilize Change Data Capture (CDC) to efficiently and reliably synchronize data between source and target systems.
- Build a Medallion Architecture: Design and implement a data platform following the Medallion Architecture (Raw, Refined, Insights) to ensure data quality, manage transformations, and support analytical needs (see the illustrative sketch after this list).
- Orchestrate and Manage Workflows: Develop and manage data orchestration processes to automate data ingestion, transformation, and delivery.
- Data Strategy & Business Value: Art...
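To make the Medallion (Raw/Refined/Insights) and CDC ideas above concrete, here is a minimal, hypothetical PySpark sketch; the layer paths, keys, and the "latest change wins" CDC pattern are assumptions for illustration, not this employer's actual design:

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("medallion_sketch").getOrCreate()

# Raw layer: land change records as-is (path and schema are hypothetical)
raw = spark.read.json("s3://example-bucket/raw/customers/")

# Refined layer: a simple CDC pattern - keep only the latest change per key
latest_per_key = Window.partitionBy("customer_id").orderBy(F.col("change_ts").desc())
refined = (
    raw.withColumn("rn", F.row_number().over(latest_per_key))
       .filter(F.col("rn") == 1)
       .drop("rn")
       .filter(F.col("op") != "DELETE")   # drop keys whose last change was a delete
)
refined.write.mode("overwrite").parquet("s3://example-bucket/refined/customers/")

# Insights layer: aggregates shaped for analytical consumers
insights = (
    refined.groupBy("country")
           .agg(F.countDistinct("customer_id").alias("customer_count"))
)
insights.write.mode("overwrite").parquet("s3://example-bucket/insights/customers_by_country/")
```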
Posted 1 week ago
6.0 - 8.0 years
5 - 9 Lacs
Mumbai
Work from Office
Key Responsibilities:
1. Corporate Travel Management: Manage all aspects of corporate travel, including flight bookings, hotel accommodations, ground transportation, and visa arrangements for both domestic and international trips. Ensure adherence to company travel policies and procedures, optimizing travel costs while maintaining comfort and safety for employees.
2. Vendor & Rate Negotiation: Establish strong relationships with airlines, hotels, and travel service providers to negotiate the best rates and corporate add-ons such as upgrades, lounge access, and flexible booking options. Continually review and renegotiate rates to maximize cost savings for the company.
3. Internal Events & Gro...
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
Role Overview: At Improzo, we are looking for a highly skilled Data and Reporting Developer (Improzo Level - Associate) to join our dynamic team. As a Big Data Developer, you will be responsible for designing, developing, and maintaining large-scale data processing systems using big data technologies. This exciting opportunity is suitable for an individual with a strong technical background and a passion for working with large datasets to deliver high-quality solutions.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and big data applications.
- Work with distributed processing frameworks (e.g., Apache Hadoop, Apache Spark) to process and analyze large datasets...
Posted 1 week ago
1.0 - 6.0 years
15 - 19 Lacs
Gurugram
Work from Office
Job Description:
- Trend-Sniping: Clock a viral moment in under 30 minutes and clap back with the perfect Boom Games-flavored take (apolitical, but savage).
- Engagement Warfare: Spark 100+ reply threads under influencer posts, without catching bans.
- Troll Alchemy: Flip haters into hype-trains with wit, charm, and zero toxicity.
- Compliance Jiu-Jitsu: Work with mods to only delete what's legally necessary, not what's fun.
- Vibe Curation: You're not just a commenter, you're a personality. If people don't wanna reply to you, we don't want you.
- Come up with original content ideas from constantly perusing the interweb.
Posted 1 week ago
6.0 - 10.0 years
5 - 9 Lacs
Noida
Work from Office
AWS data developers with 6-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect) are preferred.
Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift (see the illustrative Glue sketch below). Good communication skills and able to deliver independently.
Mandatory Competencies:
- Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift
- Big Data - PySpark
- Beh - Communication
- Database - Database Programming - SQL
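For orientation only, a skeleton AWS Glue PySpark job of the kind this role calls for might look like the sketch below; the catalog database, table, and S3 paths are hypothetical, and loading the curated output into Redshift would be a separate step (e.g. a COPY or a Glue connection) not shown here:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve arguments and build contexts
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical Glue Data Catalog source, for illustration only
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Convert to a Spark DataFrame for ordinary PySpark transformations
orders = dyf.toDF()
completed = orders.filter(orders["order_status"] == "COMPLETED")

# Write curated output to S3 (hypothetical path)
completed.write.mode("overwrite").parquet("s3://example-bucket/curated/completed_orders/")

job.commit()
```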
Posted 1 week ago
15.0 - 25.0 years
40 - 45 Lacs
Pune
Work from Office
As an owner of the product, you will be required to plan and execute the product roadmap and provide technical leadership to the engineering team. You will have to collaborate with Product Management and Implementation teams and build a commercially successful product. You will be responsible for recruiting and leading a team of highly skilled software engineers and providing strong hands-on engineering leadership. Requirement: deep technical knowledge in Software Product Engineering using Java/J2EE, Node.js, React.js, full stack, NoSQL DB, MongoDB, Cassandra, Neo4j, Elasticsearch, Kibana, ELK, Kafka, Redis, Kubernetes, Apache, Solr, ActiveMQ, RabbitMQ, Spark, Scala, Sqoop, HBase, Hive, WebSocket, webcr...
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Noida
Work from Office
Computer science or similar education background. 7+ years of total work experience. 5+ years of hands-on development experience in big data (Hadoop (Cloudera preferred), Spark, Hive, PySpark, Beeline, Scala, YARN, spark-submit, spark-java, Parquet, Ranger). Good understanding of big data storage concepts like partitioning, query and storage optimization, backup, restore, and data replication. 5+ years of experience in SDLC models (waterfall and agile). 2+ years of Java coding experience. 2+ years of experience in Unix scripting and RDBMS SQL (e.g., SQL Server). Nice-to-have skills: capital markets work experience; reference data (securities, trading books, client data, etc.); other systems involving large data...
Posted 1 week ago
3.0 - 7.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Immediate Openings on PySpark
Experience: 5+
Skill: PySpark
Location: Bangalore
Notice Period: Immediate
- Experience in cloud platforms, e.g., AWS, GCP, Azure, etc.
- Experience in distributed technology tools, viz. SQL, Spark, Python, PySpark, Scala
- Performance Tuning: optimize SQL and PySpark for performance
- Airflow workflow scheduling tool for creating data pipelines
- GitHub source control tool & experience with creating/configuring Jenkins pipelines
Posted 1 week ago
5.0 - 8.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Python, big data (Hive, Spark or Snowflake, Kafka), SQL/SQL Server. Experience in Python for data processing.
Posted 1 week ago
3.0 - 6.0 years
5 - 9 Lacs
Chennai
Work from Office
We are looking for a skilled Hadoop Developer with 3 to 6 years of experience to join our team in Chennai. The ideal candidate will have expertise in Scala, SQL, Unix, and Hive & Spark. Roles and Responsibilities: Design, develop, and implement scalable data processing solutions using Hadoop technologies. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data pipelines using Scala and Spark. Troubleshoot and resolve complex technical issues related to Hadoop applications. Participate in code reviews and contribute to improving overall code quality. Stay up-to-date with the latest trends and technologies in Hadoop developmen...
Posted 1 week ago
7.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
- Should be capable of developing/configuring data pipelines on a variety of platforms and technologies
- Possess the following technical skills: SQL, Python, PySpark, Hive, ETL, Unix, Control-M (or similar scheduling tools)
- Can demonstrate strong experience in writing complex SQL queries to perform data analysis on databases such as SQL Server, Oracle, Hive, etc.
- Experience with GCP (particularly Airflow, Dataproc, BigQuery) is an advantage (see the illustrative Airflow sketch after this list)
- Have experience with creating solutions which power AI/ML models and generative AI
- Ability to work independently on specialized assignments within the context of project deliverables
- Take ownership of providing solutions and tools that iteratively increase engineerin...
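Since the posting mentions Control-M or similar schedulers, with Airflow on GCP as an advantage, a minimal Airflow DAG sketch (assuming Airflow 2.x) is shown below for context; the DAG id, schedule, and script paths are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical three-step pipeline: extract, PySpark transform, load.
with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # run daily at 02:00
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_from_source",
        bash_command="python /opt/jobs/extract_sales.py",
    )
    transform = BashOperator(
        task_id="pyspark_transform",
        bash_command="spark-submit /opt/jobs/transform_sales.py",
    )
    load = BashOperator(
        task_id="load_to_warehouse",
        bash_command="python /opt/jobs/load_sales.py",
    )

    # Simple linear dependency chain
    extract >> transform >> load
```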
Posted 1 week ago
6.0 - 11.0 years
7 - 11 Lacs
Hyderabad
Work from Office
We are looking for a skilled Big Data Engineer with 6 to 22 years of experience. The ideal candidate will have expertise in PySpark, Azure Databricks, and workflows. This position is available as a contract role across Pan India. Roles and Responsibilities: Design and develop scalable big data systems using PySpark and Azure Databricks. Implement and manage workflows for efficient data processing. Collaborate with cross-functional teams to integrate data from various sources. Develop and maintain large-scale data pipelines and architectures. Optimize system performance and troubleshoot issues. Ensure data quality and integrity through data validation and testing procedures. Job Requirements S...
Posted 1 week ago
6.0 - 9.0 years
6 - 10 Lacs
Hyderabad, Pune
Work from Office
We are looking for a skilled Big Data Developer with 6-9 years of experience to join our team in Hyderabad/Pune. The ideal candidate will have strong knowledge of Unix/big data scripting and a solid understanding of the big data (CDP/Hive) environment. Roles and Responsibilities: Design, develop, and implement big data solutions using Hive and other relevant technologies. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data processing systems with high performance and reliability. Troubleshoot and resolve complex technical issues related to big data systems. Participate in code reviews and contribute to the improvement of the o...
Posted 1 week ago
4.0 - 6.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Strong programming skills in Python and advanced SQL. Strong experience in NumPy, Pandas, and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.
Posted 1 week ago
6.0 - 11.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Skill: Data Engineer
Notice Period: Immediate
Employment Type: Contract
Job Description:
- Experience in cloud platforms, e.g., AWS, GCP, Azure, etc.
- Experience in distributed technology tools, viz. SQL, Spark, Python, PySpark, Scala
- Performance Tuning: optimize SQL and PySpark for performance
- Airflow workflow scheduling tool for creating data pipelines
- GitHub source control tool & experience with creating/configuring Jenkins pipelines
- Experience in EMR/EC2, Databricks, etc.
- DWH tools incl. SQL database, Presto, and Snowflake
- Streaming, serverless architecture
Posted 1 week ago
6.0 - 11.0 years
5 - 9 Lacs
Hyderabad, Pune, Chennai
Work from Office
Job Details:
Skill: Scala, Hadoop, Java
Job Type: Contract to hire
Job Description:
- 10+ years of software development experience building large-scale distributed data processing systems/applications, data engineering, or large-scale internet systems.
- Experience of at least 4 years in developing/leading big data solutions at enterprise scale, with at least one end-to-end implementation.
- Strong experience in programming languages Java/J2EE/Scala.
- Good experience in Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala, and NoSQL databases.
- Experience with batch processing and AutoSys job scheduling and monitoring.
- Performance analysis, troubleshooting and resolution (this inclu...
Posted 1 week ago
6.0 - 9.0 years
27 - 42 Lacs
Hyderabad
Work from Office
Key skills: Spark, Scala
Experience: 6 to 12 years
Location: AIA Hyderabad
Job Description:
- Develop, test, and deploy data processing applications using Apache Spark and Scala.
- Optimize and tune Spark applications for better performance on large-scale data sets.
- Work with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Kafka) to build data pipelines and storage solutions.
- Collaborate with data scientists, business analysts, and other developers to understand data requirements and deliver solutions.
- Design and implement high-performance data processing and analytics solutions.
- Ensure data integrity, accuracy, and security across all processing tasks.
- Troubleshoot and resolve pe...
Posted 1 week ago
7.0 - 11.0 years
9 - 13 Lacs
Pune, Chennai, Bengaluru
Work from Office
We are looking for a skilled Tech Lead - Data Engineer with 7-11 years of experience to join our team in the Employment Firms/Recruitment Services Firms industry. Roles and Responsibilities: Design, develop, and implement data engineering solutions using various technologies. Lead a team of data engineers to ensure timely project delivery. Collaborate with cross-functional teams to identify business requirements and develop solutions. Develop and maintain large-scale data systems and architectures. Ensure data quality, integrity, and security. Stay updated with industry trends and emerging technologies. Job Requirements: Strong knowledge of data engineering principles and practices. Experience w...
Posted 1 week ago
5.0 - 10.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Looking to onboard a skilled Hadoop Admin with 5-12 years of experience to join our team in Bangalore. The ideal candidate will have a strong background in Big Data technologies and programming languages such as Python and Scala. Roles and Responsibilities: Design, develop, and implement scalable data processing systems using Hadoop and Spark. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data pipelines and architectures. Troubleshoot and resolve complex technical issues related to data processing and storage. Ensure high availability and performance of data systems and applications. Participate in code reviews and contrib...
Posted 1 week ago