0.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Description:
Skills: AWS EMR
Key Responsibilities: A day in the life of an Infoscion: as part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you.
Technical Requirements: Primary skills: Technology->Big Data->Data Processing->Map Reduce
Preferred Skills: Technology->Big Data->Data Processing->Map Reduce
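The Map Reduce skill named in this posting can be illustrated with a minimal word-count sketch in pure Python. This is a conceptual sketch of the paradigm only, not the Hadoop API; the names `map_phase`, `shuffle`, and `reduce_phase` are illustrative.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in the document.
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reducer: combine all counts observed for one word.
    return key, sum(values)

documents = ["big data big plans", "data pipelines"]
mapped = chain.from_iterable(map_phase(d) for d in documents)
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
# counts == {"big": 2, "data": 2, "plans": 1, "pipelines": 1}
```

In a real cluster the mappers and reducers run in parallel across nodes, but the three-phase shape is the same.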
Posted 3 weeks ago
7 - 11 years
50 - 60 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Role: Resident Solution Architect
Location: Remote
The Solution Architect at Koantek builds secure, highly scalable big data solutions to achieve tangible, data-driven outcomes, all the while keeping simplicity and operational effectiveness in mind. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. The role is responsible for implementing securely architected big data solutions that are operationally reliable, performant, and deliver on strategic initiatives.

Specific requirements for the role include:
- Expert-level knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Expert-level hands-on coding experience in Python, SQL, Scala, and Spark/PySpark
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
- IoT/event-driven/microservices in the cloud; experience with private and public cloud architectures, their pros and cons, and migration considerations
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- Extensive hands-on experience with the industry technology stack for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, Map Reduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Experience using Azure DevOps and CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
- Experience in creating tables, partitioning, bucketing, loading, and aggregating data using Spark SQL/Scala
- Able to build ingestion to ADLS and enable a BI layer for analytics, with a strong understanding of data modeling and defining conceptual, logical, and physical data models
- Proficient experience with architecture design, build, and optimization of big data collection, ingestion, storage, processing, and visualization

Responsibilities:
- Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Promote, emphasize, and leverage big data solutions to deploy performant systems that appropriately auto-scale, are highly available, fault-tolerant, self-monitoring, and serviceable
- Use a defense-in-depth approach in designing data solutions and AWS/Azure/GCP infrastructure
- Assist and advise data engineers in the preparation and delivery of raw data for prescriptive and predictive modeling
- Aid developers in identifying, designing, and implementing process improvements with automation tools to optimize data delivery
- Implement processes and systems to monitor data quality and security, ensuring production data is accurate and available for key stakeholders and the business processes that depend on it
- Employ change-management best practices to ensure that data remains readily accessible to the business
- Implement reusable design templates and solutions to integrate, automate, and orchestrate cloud operational needs; experience with MDM using data governance solutions

Qualifications:
- Overall experience of 12+ years in the IT field
- Hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for a near-real-time data warehouse, and machine learning solutions
- Design and development experience with scalable and cost-effective Microsoft Azure/AWS/GCP data architecture and related solutions
- Experience in a software development, data engineering, or data analytics role using Python, Scala, Spark, Java, or equivalent technologies
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience

Good to have: advanced technical certifications such as Azure Solutions Architect Expert; AWS Certified Data Analytics; DASCA Big Data Engineering and Analytics; AWS Certified Cloud Practitioner / Solutions Architect Professional; Google Cloud Certified.
Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
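Among the requirements above is partitioning and bucketing with Spark SQL. Bucketing distributes rows into a fixed number of buckets by hashing the bucket column, so equal keys always land in the same bucket. A rough pure-Python sketch of the idea follows; it uses CRC32 to keep the sketch deterministic, whereas Spark actually uses a Murmur3 hash, so the exact bucket assignments would differ.

```python
import zlib

NUM_BUCKETS = 4

def bucket_for(key, num_buckets=NUM_BUCKETS):
    # Hash the bucket column and take the value modulo the bucket count.
    # (Spark uses Murmur3 internally; CRC32 here just keeps this deterministic.)
    return zlib.crc32(str(key).encode("utf-8")) % num_buckets

def write_bucketed(rows, key_column, num_buckets=NUM_BUCKETS):
    # Group rows into buckets, the way a bucketed table write lays out files.
    buckets = {i: [] for i in range(num_buckets)}
    for row in rows:
        buckets[bucket_for(row[key_column], num_buckets)].append(row)
    return buckets

rows = [{"user_id": u, "amount": a}
        for u, a in [("u1", 10), ("u2", 5), ("u1", 7), ("u3", 3)]]
buckets = write_bucketed(rows, "user_id")
# Every row with the same user_id lands in the same bucket, so a join on
# user_id between two tables bucketed the same way needs no shuffle.
```

That shuffle-avoidance on joins and aggregations over the bucket column is the main reason bucketing appears in requirements like this one.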
Posted 1 month ago
5 - 10 years
20 - 30 Lacs
Pune
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by optimizing their IT capabilities, practices, and operations, drawing on our experience in retail, high technology, and manufacturing. With five global delivery centers and 1,900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Role: Big Data Developer/Lead
Work Location: Bangalore (CV Raman Nagar), Pune, Hyderabad, Gurugram, Noida
Experience: 5+ years
Technical Skills: Big Data, AWS, Redshift, Snowflake, Spark, Python, Scala, and SQL

Roles and Responsibilities:
- 5+ years of big data development experience, with a minimum of 2 years hands-on in Java
- Hands-on experience with API development (from an application/software engineering perspective)
- Advanced experience (5+ years) building real-time streaming and batch systems using Apache Spark and Kafka in Java
- Experience with any NoSQL store (HBase, Cassandra, MongoDB, InfluxDB)
- Solid understanding of secure application development methodologies
- Experience in developing microservices using the Spring framework is a plus
- Capable of working as an individual contributor as well as within a team
- Design, build, and maintain efficient, reusable, and reliable code
- Experience in Hadoop-based technologies: Java, Hive, Pig, Map Reduce, Spark, Python/Scala, Azure
- Able to understand complex architectures and comfortable working with multiple teams
- Excellent communication, client engagement, and client management skills are strongly preferred
- Minimum Bachelor's degree in Computer Science, Engineering, Business Information Systems, or a related field

If the above profile suits you, please share your updated profile with the HR details below: Full Name; Email ID; Phone No.; Total years of experience; Relevant experience in Big Data; Relevant experience in AWS; Relevant experience in Snowflake; Relevant experience in Redshift; Rating on SQL (out of 5); Any other technology; Notice period; CTC; ECTC; Current company; Current location; Preferred location; Any offers (if yes, please mention); Interview availability (please mention the date and time). Revert with your confirmation.
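The real-time streaming requirement above typically comes down to windowed aggregation over an event stream (in Spark Structured Streaming, `groupBy` over a time window). A minimal pure-Python sketch of a tumbling-window count follows; the event shape `(timestamp_seconds, key)` is a hypothetical simplification for illustration.

```python
from collections import defaultdict

WINDOW_SECONDS = 60

def tumbling_window_counts(events, window_seconds=WINDOW_SECONDS):
    # Assign each event to the fixed-size window containing its timestamp,
    # then count events per (window_start, key).
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "click"), (42, "click"), (61, "view"), (119, "click"), (120, "click")]
counts = tumbling_window_counts(events)
# counts == {(0, "click"): 2, (60, "view"): 1, (60, "click"): 1, (120, "click"): 1}
```

A production system adds what this sketch omits: reading events from Kafka, handling late or out-of-order data with watermarks, and checkpointing state for fault tolerance.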
Posted 3 months ago
5 - 10 years
20 - 30 Lacs
Bengaluru
Hybrid
Posted 3 months ago
5 - 10 years
20 - 30 Lacs
Hyderabad
Hybrid
Posted 3 months ago
Accenture
36723 Jobs | Dublin
Wipro
11788 Jobs | Bengaluru
EY
8277 Jobs | London
IBM
6362 Jobs | Armonk
Amazon
6322 Jobs | Seattle, WA
Oracle
5543 Jobs | Redwood City
Capgemini
5131 Jobs | Paris, France
Uplers
4724 Jobs | Ahmedabad
Infosys
4329 Jobs | Bangalore, Karnataka
Accenture in India
4290 Jobs | Dublin 2