2 - 5 years
14 - 17 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it
- Learn new technologies and apply them in feature development within the time frame provided
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- More than 6 years of overall experience, including 4+ years of strong hands-on experience in Python and Spark
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark
- Strong problem-solving skills

Preferred technical and professional experience:
- Hands-on experience with cloud technology (AWS/GCP/Azure)
Posted 1 month ago
5 - 8 years
0 Lacs
Pune, Maharashtra, India
Hybrid
Key Result Areas and Activities:
- ETL Pipeline Development and Maintenance: Design, develop, and maintain ETL pipelines using Cloudera tools such as Apache NiFi, Apache Flume, and Apache Spark. Create and maintain comprehensive documentation for data pipelines, configurations, and processes.
- Data Integration and Processing: Integrate and process data from diverse sources including relational databases, NoSQL databases, and external APIs.
- Performance Optimization: Optimize performance and scalability of Hadoop components (HDFS, YARN, MapReduce, Hive, Spark) to ensure efficient data processing. Identify and resolve issues related to data pipelines, system performance, and data integrity.
- Data Quality and Transformation: Implement data quality checks and manage data transformation processes to ensure accuracy and consistency.
- Data Security and Compliance: Apply data security measures and ensure compliance with data governance policies and regulatory requirements.

Essential Skills:
- Proficiency in Cloudera Data Platform (CDP) - Cloudera Data Engineering
- Proven track record of successful data lake implementations and pipeline development
- Knowledge of data lakehouse architectures and their implementation
- Hands-on experience with Apache Spark and Apache Airflow within the Cloudera ecosystem
- Proficiency in programming languages such as Python, Java, Scala, and shell scripting
- Exposure to containerization technologies (e.g., Docker, Kubernetes) and a system-level understanding of data structures, algorithms, distributed storage, and compute
Desirable Skills:
- Experience with other CDP services such as DataFlow and Stream Processing
- Familiarity with cloud environments such as AWS, Azure, or Google Cloud Platform
- Understanding of data governance and data quality principles
- CCP Data Engineer certification

Qualifications:
- 7+ years of experience in Cloudera/Hadoop/Big Data engineering or related roles
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Qualities:
- Can influence and implement change; demonstrates confidence, strength of conviction, and sound decision-making
- Deals with problems head-on; approaches them in a logical, systematic manner; is persistent and patient; can tackle a problem independently, is not over-critical of the factors that led to it, and is practical about it; follows up with developers on related issues
- Able to consult, write, and present persuasively
- Able to work in a self-organized, cross-functional team
- Able to iterate based on new information, peer reviews, and feedback
- Able to work seamlessly with clients across multiple geographies
- Research-focused mindset
- Proficiency in English (read/write/speak) and communication over email
- Excellent analytical, presentation, reporting, documentation, and interpersonal skills
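The extract-transform-load pattern at the heart of this role can be sketched in plain Python. This is a minimal illustration of pipeline structure only: the in-memory CSV source, the numeric quality rule, and the SQLite target are hypothetical stand-ins, not part of the posting's actual Cloudera stack.

```python
import csv
import io
import sqlite3

# Extract: read raw records from a CSV source (in-memory here for the sketch).
raw = io.StringIO("id,city,reading\n1,Pune,21.5\n2,Pune,bad\n3,Hyderabad,19.0\n")
rows = list(csv.DictReader(raw))

# Transform: a basic data-quality check -- drop rows whose reading is not numeric.
def is_valid(row):
    try:
        float(row["reading"])
        return True
    except ValueError:
        return False

clean = [(int(r["id"]), r["city"], float(r["reading"])) for r in rows if is_valid(r)]

# Load: write validated rows into a target store (SQLite stands in for the lake/warehouse).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (id INTEGER, city TEXT, reading REAL)")
con.executemany("INSERT INTO readings VALUES (?, ?, ?)", clean)
count = con.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 2 -- the row with a non-numeric reading was rejected
```

In a real Cloudera pipeline each stage would be a NiFi processor or Spark job with the quality checks expressed declaratively, but the extract/validate/load separation is the same.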
Posted 1 month ago
6 - 11 years
19 - 27 Lacs
Haryana
Work from Office
About Company

Job Description

Key responsibilities:
1. Understand, implement, and automate ETL pipelines to strong industry standards
2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc.
3. Develop, integrate, test, and maintain existing and new applications
4. Design and create data pipelines (data lake / data warehouse) for real-world energy analytics solutions
5. Expert-level proficiency in Python (preferred) for automating everyday tasks
6. Strong understanding of and experience in distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc.
7. Some experience with other leading cloud platforms, preferably Azure
8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc.
9. Ability to work in a team in an agile setting; familiarity with JIRA and a clear understanding of how Git works
10. Must have 5-7 years of experience
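The streaming frameworks named above (Kafka, Spark Streaming) all reduce to maintaining keyed aggregate state over micro-batches of events. A minimal stdlib sketch of that idea, with hypothetical meter readings standing in for a real event stream:

```python
from collections import defaultdict

# Hypothetical meter readings arriving in micro-batches, as a streaming job sees them.
batches = [
    [("plant_a", 10.0), ("plant_b", 7.5)],
    [("plant_a", 4.5), ("plant_b", 2.5), ("plant_a", 1.0)],
]

# Running totals per key -- the same shape of state a stateful Spark Streaming
# aggregation (e.g. mapGroupsWithState) maintains across batches.
totals = defaultdict(float)
for batch in batches:
    for key, value in batch:
        totals[key] += value

print(dict(totals))  # {'plant_a': 15.5, 'plant_b': 10.0}
```

Spark adds partitioning, fault-tolerant checkpointing of that state, and exactly-once delivery on top, but the per-key accumulate step is the core of the computation.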
Posted 1 month ago
4 - 8 years
12 - 22 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Warm greetings from SP Staffing!

Role: Big Data Developer
Experience required: 4 to 8 years
Work location: Bangalore/Chennai/Pune/Delhi/Hyderabad/Kochi
Required skills: Spark and Scala

Interested candidates can send resumes to nandhini.s@spstaffing.in
Posted 1 month ago
4 years
0 Lacs
Hyderabad, Telangana, India
Description

Do you want to be a leader on the team that takes Transportation and Retail models to the next generation? Do you have solid analytical thinking and metrics-driven decision making, and do you want to solve problems with solutions that will meet a growing worldwide need? Then Transportation is the team for you. We are looking for top-notch Data Engineers to be part of our world-class Business Intelligence for Transportation team.

- 4-7 years of experience performing quantitative analysis, preferably for an Internet or technology company
- Strong experience in data warehouse and business intelligence application development
- Data analysis: understand business processes, logical data models, and relational database implementations
- Expert knowledge of SQL; able to optimize complex queries
- Basic understanding of statistical analysis; experience in test design and measurement; able to execute research projects and generate practical results and recommendations
- Proven track record of working on complex modular projects and assuming a leading role in such projects
- Highly motivated and self-driven; capable of defining own design and test scenarios
- Experience with scripting languages (e.g., Perl, Python) preferred
- BS/MS degree in Computer Science
- Evaluate and implement various big-data technologies and solutions (Redshift, Hive/EMR, Tez, Spark) to optimize processing of extremely large datasets in an accurate and timely fashion
- Experience with large-scale data processing, data structure optimization, and scalability of algorithms a plus

Key job responsibilities:
- Design, build, and maintain complex data solutions for Amazon's Operations businesses
- Actively participate in the code review process, design discussions, team planning, and operational excellence; constructively identify problems and propose solutions
- Make appropriate trade-offs, re-use where possible, and be judicious about introducing dependencies
- Make efficient use of resources (e.g., system hardware, data storage, query optimization, AWS infrastructure)
- Know about recent advances in distributed systems (e.g., MapReduce, MPP architectures, external partitioning)
- Ask the right questions when the data model and requirements are not well defined, and produce designs that are scalable, maintainable, and efficient
- Make enhancements that improve the team's data architecture and make it easier to maintain (e.g., data auditing solutions, automating ad-hoc or manual operation steps)
- Own the data quality of important datasets and of any new changes/enhancements

Basic Qualifications:
- 3+ years of data engineering experience
- 4+ years of SQL experience
- Experience with data modeling, warehousing, and building ETL pipelines

Preferred Qualifications:
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ Job ID: A2941103
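The "optimize complex queries" expectation in this posting usually starts with reading the planner's output. A small sketch using SQLite's `EXPLAIN QUERY PLAN` as a stand-in for Redshift or Hive plan inspection; the table, index name, and data are hypothetical, and the exact plan strings vary by SQLite version:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE shipments (id INTEGER PRIMARY KEY, region TEXT, weight REAL)")
con.executemany(
    "INSERT INTO shipments (region, weight) VALUES (?, ?)",
    [("south" if i % 2 else "north", float(i % 50)) for i in range(1000)],
)

query = "SELECT COUNT(*) FROM shipments WHERE region = ?"

# Without an index on the filtered column, the planner falls back to a full table scan.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query, ("south",)).fetchone()[3]

# Adding the index lets the planner switch to an index search.
con.execute("CREATE INDEX idx_region ON shipments (region)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query, ("south",)).fetchone()[3]

print(plan_before)  # e.g. "SCAN shipments"
print(plan_after)   # e.g. "SEARCH shipments USING COVERING INDEX idx_region (region=?)"
```

On an MPP warehouse like Redshift the same exercise is done with `EXPLAIN` and distribution/sort keys instead of B-tree indexes, but the habit — inspect the plan, remove the scan — is the same.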
Posted 2 months ago