5.0 - 8.0 years
2 - 3 Lacs
Delhi, India
On-site
Responsible for building and maintaining cloud-based database systems of high availability and quality, depending on client requirements. Responsible for data migration from various systems to RDBMS (PostgreSQL/MySQL). Responsible for writing ETL packages using Kafka and Debezium. Design and implement cloud-based databases in accordance with client requirements, information needs, and views. Define users and enable data distribution to the right user, in the appropriate format and in a timely manner. Use high-speed transaction recovery techniques and back up data. Minimize database downtime and manage parameters to provide fast query responses. Provide proactive and reactive data management support and training to users whenever required. Monitor database performance, implement changes, and apply new patches and versions when required. Determine, enforce, and document database policies, procedures, and standards. Perform tests and evaluations regularly to ensure data security, privacy, and integrity.
Posted 1 week ago
10.0 - 15.0 years
13 - 17 Lacs
Bengaluru
Work from Office
About the Role: We are looking for a Senior Engineering Manager with 10+ years of experience, including 2+ years of people management experience, to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities: Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting. Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink. Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset. Bring a good understanding of open table formats like Delta and Iceberg. Scale data quality frameworks to ensure data accuracy and reliability. Build data lineage tracking solutions for governance, access control, and compliance. Collaborate with engineering, analytics, and business teams to identify opportunities and build/enhance self-serve data platforms. Improve system stability, monitoring, and observability to ensure high availability of the platform. Work with open-source communities and facilitate contributions to OSS projects aligned with Myntra's tech stack. Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Management Responsibilities: Technical Guidance: This role will play the engineering lead role for teams within Myntra Data Platform. You will provide technical leadership to a team of excellent data engineers; this requires that you have the technical depth to make complex design decisions and the hands-on ability to lead by example. Execution and Delivery: You will be expected to instill and follow good software development practices and ensure timely delivery of high-quality products. You should be familiar with agile practices and be able to adapt them to the needs of the business, with a constant focus on product quality. Team Management: You will be responsible for hiring and mentoring your team, helping individuals grow in their careers, maintaining a constant dialogue about their aspirations, and sharing prompt, clear, and actionable feedback about performance.

Qualifications: Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Experience: 10+ years of experience in building large-scale data platforms. 2+ years of people management experience. Expertise in big data architectures using Databricks, Trino, and Debezium. Strong experience with streaming platforms, including Confluent Kafka. Experience in data ingestion, storage, processing, and serving in a cloud-based environment. Experience implementing data quality checks using Great Expectations. Deep understanding of data lineage, metadata management, and governance practices. Strong knowledge of query optimization, cost efficiency, and scaling architectures. Familiarity with OSS contributions and keeping up with industry trends in data engineering.

Soft Skills: Strong analytical and problem-solving skills with a pragmatic approach to technical challenges. Excellent communication and collaboration skills to work effectively with cross-functional teams. Ability to lead large-scale projects in a fast-paced, dynamic environment. Passion for continuous learning, open-source collaboration, and building best-in-class data products.
Posted 2 weeks ago
7 - 12 years
9 - 14 Lacs
Pune
Work from Office
About the Job: Red Hat is seeking skilled Java experts to join our team of Middleware Engineers supporting the JBoss Enterprise Middleware Suite. We're adding a senior- or mid-level engineer to assist customers with middleware code and architecture. As a Middleware Support Engineer, you will provide an important level of technical assistance to ensure that our highly valued customers get the most out of the product. This position extends well beyond product support, as you will take on the role of trusted partner to our enterprise customers by offering developer-to-developer assistance for JBoss open-source middleware software. This means you will work closely with junior and senior engineers and developers to write new code, fix and improve code, and provide highly technical solutions to support requests. Additionally, you will write patches, provide JBoss updates, and contribute ideas through participation in an open and collaborative team environment. This opportunity is a tremendous chance to become part of a fast-paced, leading-edge company that is changing the way software is developed, sold, and supported. This is a highly skilled position that requires an engineer with initiative.

What will you do? Provide a high, detailed level of technical assistance to ensure that our customers get the most out of our JBoss products. Engage and collaborate with open-source developers around the world. Offer developer-to-developer assistance for the JBoss Enterprise Middleware Suite. Work alongside in-house developers to write new code, fix and improve code, provide highly technical solutions, and contribute to the JBoss and open-source software communities. Act as the technical point person for a technology of your choosing within the JBoss Product Suite. Develop patches and feature requests to provide upstream to the community and downstream to customers. Advise customers on JEE architectural design decisions. Work with other open-source projects to help customers integrate Red Hat products with their applications.

What will you bring? 7+ years of professional J2EE/JEE platform experience. Deep experience with one or several JEE/app server technologies such as JMS, Web Services, SOAP, REST, Tomcat, Datasources, FuseSource, Camel, AMQ, Karaf, Debezium, and Kafka. Experience working with the Spring Boot framework. Must have hands-on experience with application server technologies (JBoss/WebSphere/WebLogic). A solid understanding of Java programming APIs and popular Java frameworks is preferred. Experience with databases and SQL. Must have strong troubleshooting/debugging skills and a passion for problem solving and investigation. Ability and willingness to learn new open-source middleware technologies. Very clear and effective English communication skills (verbal and written). A BE/BS/BA degree in engineering or computer science is preferred; equivalent experience within the enterprise IT sector will also be considered.

The following are considered a plus: Basic knowledge of SSO technologies like RHSSO, Keycloak, Kerberos, and SAML. Knowledge of LDAP servers like Active Directory, OpenLDAP, RHDS (Red Hat Directory Server), etc. Experience with cryptography, including PKI, SSL/TLS, and key management.
Posted 2 months ago
3 - 6 years
5 - 8 Lacs
Gurgaon
Work from Office
We are looking for a Data Engineer who likes to innovate and seeks out complex problems. We recognize that strength comes from diversity and will embrace your unique skills, curiosity, drive, and passion while giving you the opportunity to grow technically and as an individual. Design is an iterative process, whether for UX, services, or infrastructure. Our goal is to drive modernizing and improving application capabilities.

Job Responsibilities: As a Data Engineer, you will be joining our Data Engineering & Modernization team, transforming our global financial network and improving the data products and services we provide to our internal customers. This team will leverage cutting-edge data engineering and modernization techniques to develop scalable solutions for managing data and building data products. In this role, you are expected to: Be involved from the inception of projects to understand requirements, and architect, develop, deploy, and maintain data. Work in a multi-disciplinary, agile squad, which involves partnering with program and product managers to expand the product offering based on business demands. Focus on speed to market and getting data products and services into the hands of our stakeholders; passion to transform the financial industry is key to the success of this role. Maintain a positive and collaborative working relationship with teams within the NCR Atleos technology organization, as well as with the wider business. Creative and inventive problem-solving skills for reduced turnaround times are required and valued, and will be a major part of the job.

An ideal candidate would have: BA/BS in Computer Science or equivalent practical experience. Experience applying machine learning and AI techniques to modernizing data and reporting use cases. Overall 3+ years of experience on data analytics or data warehousing projects. At least 2+ years of cloud experience on AWS/Azure/GCP, preferably Azure (Microsoft Azure, ADF, Synapse). Programming in Python and PySpark, with experience using pandas, ML libraries, etc. Data streaming with Flink/Spark Structured Streaming. Open-source orchestration frameworks like dbt, ADF, and Airflow. Open-source data ingestion frameworks like Airbyte and Debezium. Experience migrating from traditional on-prem OLTP/OLAP databases to cloud-native DBaaS and/or NoSQL databases like Cassandra, Neo4j, MongoDB, etc. Deep expertise operating in a cloud environment and with cloud-native databases like Cosmos DB, Couchbase, etc. Proficiency in various data modelling techniques, such as ER, hierarchical, relational, or NoSQL modelling. Excellent design, development, and tuning experience with SQL (OLTP and OLAP) and NoSQL databases. Experience with modern database DevOps tools like Liquibase, Redgate Flyway, or DBmaestro. Deep understanding of data security and compliance, and the related architecture. Deep understanding of, and strong administrative experience with, distributed data processing frameworks such as Hadoop, Spark, and others. Experience with programming languages like Python, Java, and Scala, and with machine learning libraries. Experience with DevOps tools like Git, Maven, Jenkins, GitHub Actions, and Azure DevOps. Experience with Agile development concepts and related tools. Ability to tune and troubleshoot performance issues across the codebase and database queries. Excellent problem-solving skills, with the ability to think critically and creatively to develop innovative data solutions. Excellent written and strong verbal communication skills, with the ability to effectively convey complex technical concepts to a diverse audience. Passion for learning with a proactive mindset, and the ability to work independently and collaboratively in a fast-paced, dynamic environment.

Additional Skills: Leverage machine learning and AI techniques for operationalizing data pipelines and building data products. Provide data services using APIs. Containerize data products and services using Kubernetes and/or Docker.
Posted 2 months ago
12 - 20 years
27 - 42 Lacs
Trivandrum, Bengaluru, Hyderabad
Work from Office
Hiring for an AWS Big Data Architect who can join immediately with one of our clients.

Role: Big Data Architect / AWS Big Data Architect. Experience: 12+ years. Locations: Hyderabad, Bangalore, Gurugram, Kochi, Trivandrum. Shift Timings: overlap with UK timings (2-11 PM IST). Notice Period: Immediate joiners / serving notice within 30 days.

Required Skills & Qualifications: 12+ years of experience in Big Data architecture and engineering. Strong expertise in AWS (DMS, Kinesis, Athena, Glue, Lambda, S3, EMR, Redshift, etc.). Hands-on experience with Debezium and Kafka for real-time data streaming and synchronization. Proficiency in Spark optimization for batch processing improvements. Strong SQL and Oracle query optimization experience. Expertise in Big Data frameworks (Hadoop, Spark, Hive, Presto, Athena, etc.). Experience in CI/CD automation and integrating AWS services with DevOps pipelines. Strong problem-solving skills and ability to work in an Agile environment.

Preferred Skills (Good to Have): Experience with Dremio to Athena migrations. Exposure to cloud-native DR solutions on AWS. Strong analytical skills to document and implement performance improvements.

For more details, contact: 9000336401. Mail ID: chandana.n@kksoftwareassociates.com. For more job alerts, please follow: https://lnkd.in/gHMuPUXW
Posted 3 months ago
13 - 20 years
25 - 40 Lacs
Bengaluru, Hyderabad, Gurgaon
Work from Office
Role & Responsibilities: We are seeking a highly skilled Big Data Architect with deep expertise in AWS, Kafka, Debezium, and Spark. This role offers an exciting opportunity to be a critical player in optimizing query performance, data synchronization, disaster recovery (DR) solutions, and simplifying reporting workflows. The ideal candidate will have hands-on experience with a broad range of AWS-native services, big data processing frameworks, and CI/CD integrations to drive impactful system and performance enhancements.

Required Skills & Qualifications: 12+ years of experience in Big Data architecture and engineering, with a proven track record of successful large-scale data solutions. Extensive expertise in AWS services such as DMS, Kinesis, Athena, Glue, Lambda, S3, EMR, Redshift, etc. Hands-on experience with Debezium and Kafka for real-time data streaming, change data capture (CDC), and ensuring seamless data synchronization across systems. Expertise in Spark optimization, particularly for batch processing improvements, including reducing job execution times and resource utilization. Strong SQL and Oracle query optimization skills, with a deep understanding of database performance tuning. Experience with Big Data frameworks like Hadoop, Spark, Hive, Presto, and Athena. Proven background in CI/CD automation and integrating AWS services with DevOps pipelines. Exceptional problem-solving abilities and the capacity to work effectively in an Agile environment.

Skills: Data Architecture, AWS, Spark, SQL

Interested candidates, please share your updated resumes to saideep.p@kksoftwareassociates.com or contact 9390510069.
Posted 3 months ago