5 - 10 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Title: EMR_Spark SME
Experience: 5-10 Years
Location: Bangalore

Technical Skills:
- 5+ years of experience in big data technologies, with hands-on expertise in AWS EMR and Apache Spark.
- Proficiency in Spark Core, Spark SQL, and Spark Streaming for large-scale data processing.
- Strong experience with data formats (Parquet, Avro, JSON) and data storage solutions (Amazon S3, HDFS).
- Solid understanding of distributed systems architecture and cluster resource management (YARN).
- Familiarity with AWS services (S3, IAM, Lambda, Glue, Redshift, Athena).
- Experience in scripting and programming languages such as Python, Scala, and Java.
- Knowledge of containerization and orchestration (Docker, Kubernetes) is a plus.

Responsibilities:
- Architect and develop scalable data processing solutions using AWS EMR and Apache Spark.
- Optimize and tune Spark jobs for performance and cost efficiency on EMR clusters.
- Monitor, troubleshoot, and resolve issues related to EMR and Spark workloads.
- Implement best practices for cluster management, data partitioning, and job execution.
- Collaborate with data engineering and analytics teams to integrate Spark solutions with the broader data ecosystem (S3, RDS, Redshift, Glue, etc.).
- Automate deployments and cluster management using infrastructure-as-code tools such as CloudFormation and Terraform, along with CI/CD pipelines.
- Ensure data security and governance in EMR and Spark environments in compliance with company policies.
- Provide technical leadership and mentorship to junior engineers and data analysts.
- Stay current with new AWS EMR features and Spark versions to recommend improvements and upgrades.

Requirements and Skills:
- Performance tuning and optimization of Spark jobs.
- Problem-solving skills with the ability to diagnose and resolve complex technical issues.
- Strong experience with version control systems (Git) and CI/CD pipelines.
- Excellent communication skills to explain technical concepts to both technical and non-technical audiences.

Qualification:
- Education: B.Tech, BE, BCA, MCA, M.Tech, or an equivalent technical degree from a reputed college.
- Certifications: AWS Certified Solutions Architect (Associate/Professional), AWS Certified Data Analytics (Specialty).
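By way of illustration, here is a minimal Scala sketch of the kind of Spark tuning and partitioning work this posting describes: a broadcast join to avoid a shuffle, followed by a date-partitioned Parquet write on S3. The bucket, paths, column names, and partition count are placeholders, not values from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object PartitionedWriteJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("emr-partitioned-write") // on EMR, master/cluster config comes from the cluster
      .getOrCreate()

    // Hypothetical S3 locations -- substitute your own bucket and prefixes.
    val events = spark.read.json("s3://my-bucket/raw/events/")
    val lookup = spark.read.parquet("s3://my-bucket/dim/countries/")

    // Broadcast the small dimension table so the join avoids shuffling the large side.
    val enriched = events.join(broadcast(lookup), Seq("country_code"))

    // Partition output by date so downstream readers (Athena, Spark SQL)
    // can prune partitions instead of scanning the full dataset.
    enriched
      .repartition(200) // tune to cluster size and data volume
      .write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://my-bucket/curated/events/")

    spark.stop()
  }
}
```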
Posted 3 weeks ago
9 - 11 years
37 - 40 Lacs
Ahmedabad, Bengaluru, Mumbai (All Areas)
Work from Office
Dear Candidate,

We are hiring a Scala Developer to work on high-performance distributed systems, leveraging the power of functional and object-oriented paradigms. This role is perfect for engineers passionate about clean code, concurrency, and big data pipelines.

Key Responsibilities:
- Build scalable backend services using Scala and the Play or Akka frameworks.
- Write concurrent and reactive code for high-throughput applications.
- Integrate with Kafka, Spark, or Hadoop for data processing.
- Ensure code quality through unit tests and property-based testing.
- Work with microservices, APIs, and cloud-native deployments.

Required Skills & Qualifications:
- Proficient in Scala, with a strong grasp of functional programming
- Experience with Akka, Play, or Cats
- Familiarity with Big Data tools and RESTful API development
- Bonus: Experience with ZIO, Monix, or Slick

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and your preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies
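For readers unfamiliar with the property-based testing this posting mentions, here is a minimal ScalaCheck sketch: instead of hand-picked cases, properties are checked against many generated inputs. The Dedup function and its two properties are invented for illustration.

```scala
import org.scalacheck.Prop.forAll
import org.scalacheck.Properties

// A pure function under test: deduplicate while keeping first occurrences.
object Dedup {
  def distinctKeepFirst[A](xs: List[A]): List[A] = xs.distinct
}

// Each property states an invariant that must hold for *all* generated inputs.
object DedupSpec extends Properties("Dedup") {
  property("output has no duplicates") = forAll { (xs: List[Int]) =>
    val out = Dedup.distinctKeepFirst(xs)
    out.toSet.size == out.length
  }

  property("output preserves membership") = forAll { (xs: List[Int]) =>
    Dedup.distinctKeepFirst(xs).toSet == xs.toSet
  }
}
```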
Posted 1 month ago
7 - 11 years
50 - 60 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Role: Resident Solution Architect
Location: Remote

The Solution Architect at Koantek builds secure, highly scalable big data solutions to achieve tangible, data-driven outcomes, all while keeping simplicity and operational effectiveness in mind. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. This role is responsible for implementing securely architected big data solutions that are operationally reliable, performant, and deliver on strategic initiatives.

Specific requirements for the role include:
- Expert-level knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Expert-level hands-on coding experience in Python, SQL, Spark/Scala, or PySpark
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
- IoT/event-driven/microservices in the cloud: experience with private and public cloud architectures, their pros and cons, and migration considerations
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- Extensive hands-on experience with the industry technology stack for data management, ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Experience using Azure DevOps and CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
- Experience in creating tables, partitioning, bucketing, and loading and aggregating data using Spark SQL/Scala
- Ability to build ingestion to ADLS and enable a BI layer for analytics, with a strong understanding of data modeling and of defining conceptual, logical, and physical data models
- Proficient-level experience with architecture design, build, and optimization of big data collection, ingestion, storage, processing, and visualization

Responsibilities:
- Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Promote, emphasize, and leverage big data solutions to deploy performant systems that appropriately auto-scale, are highly available, fault-tolerant, self-monitoring, and serviceable
- Use a defense-in-depth approach in designing data solutions and AWS/Azure/GCP infrastructure
- Assist and advise data engineers in the preparation and delivery of raw data for prescriptive and predictive modeling
- Aid developers in identifying, designing, and implementing process improvements with automation tools to optimize data delivery
- Implement processes and systems to monitor data quality and security, ensuring production data is accurate and available for key stakeholders and the business processes that depend on it
- Employ change management best practices to ensure that data remains readily accessible to the business
- Implement reusable design templates and solutions to integrate, automate, and orchestrate cloud operational needs; experience with MDM using data governance solutions

Qualifications:
- Overall experience of 12+ years in the IT field
- Hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for near-real-time data warehousing, and machine learning solutions
- Design and development experience with scalable and cost-effective Microsoft Azure/AWS/GCP data architecture and related solutions
- Experience in software development, data engineering, or data analytics using Python, Scala, Spark, Java, or equivalent technologies
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience

Good to have (advanced technical certifications):
- Azure Solutions Architect Expert
- AWS Certified Data Analytics (Specialty); DASCA Big Data Engineering and Analytics
- AWS Certified Cloud Practitioner; AWS Certified Solutions Architect - Professional
- Google Cloud Certified

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
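To make the "creating tables, partitioning, bucketing, loading and aggregating data using Spark SQL/Scala" requirement concrete, here is a minimal sketch. The sales table, the staging_sales source view, and the bucket/partition counts are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object TableLayoutDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partitioned-bucketed-tables")
      .enableHiveSupport() // persistent bucketed tables need a metastore
      .getOrCreate()

    // Partition by a low-cardinality column; bucket by a high-cardinality join key.
    spark.sql("""
      CREATE TABLE IF NOT EXISTS sales (
        order_id    BIGINT,
        customer_id BIGINT,
        amount      DOUBLE,
        region      STRING
      )
      USING PARQUET
      PARTITIONED BY (region)
      CLUSTERED BY (customer_id) INTO 16 BUCKETS
    """)

    // Load from a hypothetical staging view; bucketing lets later joins
    // on customer_id avoid a full shuffle.
    spark.sql("""
      INSERT INTO sales
      SELECT order_id, customer_id, amount, region FROM staging_sales
    """)

    // Aggregate: the region filter triggers partition pruning, so only
    // the matching partition is read.
    spark.sql("""
      SELECT customer_id, SUM(amount) AS total
      FROM sales
      WHERE region = 'APAC'
      GROUP BY customer_id
    """).show()
  }
}
```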
Posted 1 month ago
4 - 9 years
11 - 14 Lacs
Chennai, Pune, Bengaluru
Work from Office
About the Position: As a Data Engineer, you will play a critical role in ensuring the reliability, availability, and performance of our systems and applications.

Role: Data Engineer (Java + Spark)
Location: All over India (preference: Pune)
Experience: 4-10 years
Job Type: Full-time employment

What You'll Do:
- Design and implement client requirements.
- Build data pipelines and perform transformations using Java and Spark in Databricks.
- Create automated workflows using triggers and scheduled jobs in Airflow.
- Design and develop Airflow DAGs to orchestrate data processing jobs.
- Design and develop timely notifications via email alerts in the event of job failures or critical system issues.
- Develop and implement code for the data processing logic.
- Interact directly with customers to understand and gather requirements.
- Provide technical input to customers for analysis and design work.
- Support customer queries.

Expertise You'll Bring:
- Proficiency in Java, with a good understanding of its ecosystem (must have).
- Experience in data engineering with Java and Spark; Java is a must.
- Basic knowledge of Linux and Linux scripting (must have).
- Transformation and aggregation of data from multiple sources.
- Good knowledge of Spark architecture, including Spark Core, Spark SQL, RDDs, Datasets, and DataFrames (good to have).
- Performance tuning using optimization techniques, caching data in memory, broadcast variables, etc. (good to have).
- Azure/cloud DevOps concepts and CI/CD pipelines (good to have).
- Good knowledge of the architecture of a Spark application (good to have).
- Familiarity with deployment pipeline concepts (good to have).
- Hands-on experience with Git Bash (must have).
- Experience with Scrum methodology, including its ceremonies and ways of working (good to have).
- Ability to communicate with end clients via email, with good email etiquette, and by phone.
- Understanding of business needs, identifying the right data sources, and developing scalable, reliable data pipelines to ensure a smooth process.

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
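Below is a minimal sketch of the multi-source transformation-and-aggregation work this posting describes. Scala is shown for brevity and consistency with the other sketches on this page; the role itself calls for the near-identical Java Dataset API. The table names are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Scala shown for illustration; the posting's Java Dataset API mirrors
// these calls almost one-to-one.
object OrdersRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-rollup").getOrCreate()

    // Hypothetical input tables registered in the Databricks metastore.
    val orders    = spark.table("raw.orders")
    val customers = spark.table("raw.customers")

    // Transformation: join two sources, then aggregate per customer segment.
    val rollup = orders
      .join(broadcast(customers), Seq("customer_id")) // broadcast the small side
      .where(col("status") === "COMPLETED")
      .groupBy(col("segment"))
      .agg(
        count(lit(1)).as("order_count"),
        sum(col("amount")).as("revenue")
      )

    // Cache if the result feeds several downstream writes in the same job.
    rollup.cache()
    rollup.write.mode("overwrite").saveAsTable("curated.segment_rollup")

    spark.stop()
  }
}
```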
Posted 2 months ago
4 - 9 years
18 - 33 Lacs
Pune, Delhi NCR, Bengaluru
Hybrid
Experience in Spark 2.0 and above; Java 8 and above required. Design, code, test, document, and implement application release projects as part of a development team. Experience in programming and debugging business applications.
Posted 2 months ago
4 - 8 years
6 - 10 Lacs
Bengaluru
Work from Office
Overall 4 to 8 years of experience in the IT industry, with a minimum of 4 years working on data engineering using Azure Databricks, Synapse, and ADF/Airflow. Experience on at least 3 projects building and maintaining ETL/ELT pipelines for large data sets, covering complex data processing, transformations, business logic, cost monitoring and performance optimization, and feature engineering processes.

Must-have skills:
- Extensive experience with Azure Databricks (ADB), Delta Lake, Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), Azure SQL Database (SQL DB), SQL, and ELT/ETL pipeline development in a Spark-based environment.
- Extensive experience with Spark Core, PySpark, Python, Spark SQL, Scala, and Azure Blob Storage.
- Experience in real-time data processing using Apache Kafka/Event Hubs/IoT, Structured Streaming, and Stream Analytics.
- Experience with Apache Airflow for ELT orchestration.
- Experience with infrastructure management and infrastructure as code (e.g., Terraform).
- Experience with CI/CD pipelines and version control tools such as GitHub and Azure DevOps.
- Experience with the Azure cloud platform.

Good to have:
- Experience/knowledge of containerization (Docker, Kubernetes)
- Experience working in Agile methodology

Qualifications: BE, MS, M.Tech, or MCA

Additional information - Certifications: Azure Big Data, Databricks Certified Associate
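To make the Structured Streaming requirement concrete, here is a minimal Scala sketch of a Kafka-to-Delta stream of the kind this posting implies. The broker, topic, and paths are placeholders, and the Delta sink assumes a Databricks-style runtime where Delta Lake is available; Event Hubs also exposes a Kafka-compatible endpoint.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToDelta {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-to-delta").getOrCreate()

    // Read a hypothetical topic from a placeholder broker.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092") // placeholder broker
      .option("subscribe", "device-events")               // placeholder topic
      .option("startingOffsets", "latest")
      .load()

    // Kafka delivers binary key/value columns; cast before parsing downstream.
    val events = raw.selectExpr("CAST(value AS STRING) AS json", "timestamp")

    // Write to a Delta path with a checkpoint so the stream resumes exactly
    // where it left off after failures or redeploys.
    val query = events.writeStream
      .format("delta")
      .option("checkpointLocation", "/mnt/checkpoints/device-events") // placeholder path
      .outputMode("append")
      .start("/mnt/delta/device_events")                              // placeholder path

    query.awaitTermination()
  }
}
```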
Posted 2 months ago
3 - 8 years
20 - 35 Lacs
Pune, Bengaluru, Gurgaon
Hybrid
Job Title: Data Engineer with Scala & Spark

Job Description: As a Data Engineer with Scala & Spark, you will be responsible for designing, developing, and maintaining Scala applications. You will collaborate with cross-functional teams to define, design, and ship new features, as well as maintain and improve existing codebases. Your role will also involve troubleshooting, debugging, and optimizing application performance. You should have a strong understanding of functional programming concepts, be proficient in Scala, and have experience with related technologies.

Responsibilities:
1. Design, implement, and maintain Scala applications.
2. Collaborate with cross-functional teams to define and develop new features.
3. Write clean, maintainable, and efficient code.
4. Troubleshoot, debug, and optimize application performance.
5. Contribute to the entire development lifecycle, including concept, design, build, deploy, test, release, and support.
6. Stay up-to-date with the latest industry trends and technologies to ensure the application's competitiveness.
7. Participate in code reviews and provide constructive feedback to team members.

Skills and Qualifications:
1. Bachelor's degree in Computer Science, Engineering, or a related field.
2. Proven experience as a Scala Developer using Spark, or in a similar role.
3. Strong understanding of functional programming concepts.
4. Proficiency in the Scala programming language.
5. Strong analytical skills.
6. Familiarity with build tools such as Maven.
7. Knowledge of database systems (SQL and NoSQL) and experience with data modeling.
8. Understanding of distributed computing principles.
9. Familiarity with microservices architecture.
10. Experience with version control systems, preferably Git.
11. Excellent problem-solving and communication skills.
12. Ability to work both independently and collaboratively in a team environment.
13. Knowledge of Agile development methodologies.

Nice to Have:
1. Experience with cloud platforms such as AWS, Azure, or GCP.
2. Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.
3. Familiarity with continuous integration and continuous deployment (CI/CD) pipelines.
4. Experience with other programming languages such as Java or Python.
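As a compact illustration of the functional, typed style this posting asks for, here is a pure Dataset transformation built on case classes; the Click/UserStats types and the sample data are invented for illustration.

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

// Typed pipeline: case classes give compile-time checks that untyped
// DataFrames cannot, which suits a functional-programming style.
final case class Click(userId: Long, page: String, durationMs: Long)
final case class UserStats(userId: Long, clicks: Long, avgDurationMs: Double)

object ClickStats {
  // Pure, testable transformation: Dataset in, Dataset out, no side effects.
  def stats(clicks: Dataset[Click])(implicit spark: SparkSession): Dataset[UserStats] = {
    import spark.implicits._
    clicks
      .filter(_.durationMs > 0)
      .groupByKey(_.userId)
      .mapGroups { (id, rows) =>
        val list = rows.toList
        UserStats(id, list.size.toLong, list.map(_.durationMs).sum.toDouble / list.size)
      }
  }

  def main(args: Array[String]): Unit = {
    implicit val spark: SparkSession =
      SparkSession.builder().appName("click-stats").master("local[*]").getOrCreate()
    import spark.implicits._

    val sample = Seq(Click(1, "/home", 1200), Click(1, "/docs", 800), Click(2, "/home", 300)).toDS()
    stats(sample).show()
    spark.stop()
  }
}
```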
Posted 3 months ago