6.0 - 8.0 years
8 - 10 Lacs
Pune
Work from Office
Key Responsibilities:
- Design and develop scalable backend services using Go (Golang)
- Collaborate with cross-functional teams to understand project requirements
- Write efficient, maintainable, and reusable code
- Debug and resolve production issues as they arise
- Ensure code quality through unit and integration testing
- Participate in code reviews and technical discussions

Required Skills:
- Minimum 4 years of experience in software development
- At least 2 years of strong hands-on experience with Golang
- Solid understanding of REST APIs and microservices architecture
- Experience with relational and NoSQL databases
- Familiarity with Docker, Kubernetes, and CI/CD pipelines is a plus
- Excellent problem-solving and communication skills
Posted 2 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
Hyderabad
Work from Office
What you will do
In this vital role you will be responsible for developing and maintaining software applications, components, and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. The role requires experience in, and a deep understanding of, both front-end and back-end development. The Full Stack Software Engineer will work closely with product managers, designers, and other engineers to create high-quality, scalable software solutions, automate operations, monitor system health, and respond to incidents to minimize downtime. The Full Stack Software Engineer will also contribute to design discussions and provide guidance on technical feasibility and best standards.

Roles & Responsibilities:
- Develop complex software projects from conception to deployment, including delivery scope, risk, and timeline.
- Conduct code reviews to ensure code quality and adherence to best practices.
- Contribute to both front-end and back-end development using cloud technology.
- Provide ongoing support and maintenance for design systems and applications, ensuring reliability, reuse, and scalability while meeting accessibility and best standards.
- Develop innovative solutions using generative AI technologies.
- Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations.
- Identify and resolve technical challenges, software bugs, and performance issues effectively.
- Stay updated with the latest trends and advancements.
- Analyze and understand the functional and technical requirements of applications, solutions, and systems, and translate them into software architecture and design specifications.
- Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software.
- Work closely with cross-functional teams, including product management, stakeholders, design, and QA, to deliver high-quality software on time.
- Maintain detailed documentation of software designs, code, and development processes.
- Work on integrating with other systems and platforms to ensure seamless data flow and functionality.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 1 to 3 years of experience in Computer Science, IT, or a related field; OR
- Bachelor's degree and 3 to 5 years of experience in Computer Science, IT, or a related field; OR
- Diploma and 7 to 9 years of experience in Computer Science, IT, or a related field.

Must-Have Skills:
- Hands-on experience with various cloud services, understanding their pros and cons in the context of well-architected cloud design principles.
- Experience with developing and maintaining design systems across teams.
- Hands-on experience with full-stack software development.
- Proficiency in programming languages such as JavaScript, Python, and SQL/NoSQL.
- Familiarity with frameworks such as React JS and visualization libraries.
- Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills.
- Experience with API integration, serverless, and microservices architecture.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with website development and an understanding of website localization processes, which involve adapting content to fit cultural and linguistic contexts.

Preferred Qualifications / Good-to-Have Skills:
- Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes).
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk).
- Experience with data processing tools such as Hadoop, Spark, or similar.
- Experience with popular large language models.
- Experience with the LangChain or LlamaIndex frameworks for language models; experience with prompt engineering and model fine-tuning.

Professional Certifications:
- Relevant certifications such as CISSP, AWS Developer certification, CompTIA Network+, or MCSE (preferred).
- Any SAFe Agile certification (preferred).

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
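The "vector databases for large language models" skill above boils down to nearest-neighbour search over embedding vectors. As a rough illustration only — the vectors, document names, and dimensionality here are invented, and a real system would use a proper embedding model plus an approximate-nearest-neighbour index rather than brute force:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query, corpus):
    # Brute-force: return the corpus key most similar to the query vector.
    return max(corpus, key=lambda k: cosine(query, corpus[k]))

# Hypothetical 3-dimensional "embeddings" of two documents.
corpus = {
    "doc_refunds":  [0.9, 0.1, 0.0],
    "doc_shipping": [0.1, 0.8, 0.2],
}
print(nearest([0.85, 0.2, 0.0], corpus))  # doc_refunds
```

A vector database replaces the `max` over all items with an indexed lookup so the same query scales to millions of embeddings.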
Posted 2 weeks ago
7.0 - 12.0 years
9 - 14 Lacs
Mumbai, Maharashtra
Work from Office
Grade Level (for internal use): 10

The Team: You will be an expert contributor on the Ratings Organization's Data Services Product Engineering Team. This team, which has broad and expert knowledge of the Ratings organization's critical data domains, technology stacks, and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings' next-gen analytics platform.

Responsibilities:
- Design and implement innovative software solutions to enhance S&P Ratings' cloud-based data platforms.
- Mentor a team of engineers, fostering a culture of trust, continuous growth, and collaborative problem-solving.
- Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals.
- Manage and improve existing software solutions, ensuring high performance and scalability.
- Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes.
- Produce comprehensive technical design documents and conduct technical walkthroughs.

Experience & Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent is required.
- Proficient with software development lifecycle (SDLC) methodologies such as Agile and test-driven development.
- 7+ years of development experience in enterprise products and modern web development technologies: Java/J2EE, UI frameworks such as Angular and React, SQL, Oracle, and NoSQL databases such as MongoDB.
- Experience designing transactional/data warehouse/data lake systems and data integrations with the Big Data ecosystem, leveraging AWS cloud technologies.
- Experience with Delta Lake systems such as Databricks using AWS cloud technologies and PySpark is a plus.
- Thorough understanding of distributed computing.
- Passionate, smart, and articulate developer.
- Quality-first mindset with a strong background and experience developing products for a global audience at scale.
- Excellent analytical thinking, interpersonal, oral, and written communication skills, with a strong ability to influence both IT and business partners.
- Superior knowledge of system architecture, object-oriented design, and design patterns.
- Good work ethic, self-starter, and results-oriented.

Additional Preferred Qualifications:
- Experience working with AWS.
- Experience with the SAFe Agile Framework.
- Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
- Hands-on experience contributing to application architecture and design; proven software/enterprise integration design principles.
- Ability to prioritize and manage work to critical project timelines in a fast-paced environment.
- Ability to train and mentor.

Benefits:
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
Posted 2 weeks ago
4.0 - 6.0 years
4 - 6 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Job Title: Data Platform Developer

Key Responsibilities
As a Data Platform Developer, you will:
- Solution Design & Development: Design, build, and unit test applications on the Spark framework using Python (PySpark). Translate requirements into full-fledged, scalable PySpark-based applications for both batch and streaming requirements.
- Data Pipeline Development: Develop and execute data pipeline testing processes, and validate business rules and policies. Build integrated solutions leveraging Unix shell scripting, RDBMS, Hive, the HDFS file system, HDFS file types, and HDFS compression codecs.
- Big Data Ecosystem Management: Apply in-depth knowledge of various Hadoop and NoSQL databases.
- Automation & CI/CD: Create and maintain integration and regression testing frameworks on Jenkins, integrated with Bitbucket and/or Git repositories.
- Agile Collaboration: Participate actively in the Agile development process, documenting and communicating issues and bugs related to data standards in scrum meetings. Work collaboratively with both onsite and offshore teams.
- Technical Documentation: Develop and review technical documentation for delivered artifacts.
- Problem Solving & Triage: Solve complex data-driven scenarios and triage defects and production issues.
- Deployment & Release: Participate in code release and production deployment.
- Continuous Learning: Demonstrate an ability to learn, unlearn, and relearn concepts with an open and analytical mindset, and be comfortable tackling new challenges and ways of working.

Mandatory Skills & Experience
- PySpark Expertise: Extensive experience in the design, build, and deployment of PySpark-based applications (minimum 3 years).
- Hadoop Ecosystem: Minimum 3 years of experience with Hive, YARN, and HDFS.
- Spark: Ability to design, build, and unit test applications on the Spark framework using Python.
- Scripting & Databases: Proficiency in Unix shell scripting and experience with RDBMS. Strong hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities.
- NoSQL Databases: In-depth knowledge of various NoSQL databases.
- CI/CD Tools: Experience creating and maintaining integration and regression testing frameworks on Jenkins integrated with Bitbucket and/or Git repositories.
- Code Quality: Ability to build abstracted, modularized, and reusable code components.
- Big Data Environment: Expertise in handling complex, large-scale Big Data environments (preferably 20 TB+).

Experience & Qualifications:
- Minimum 3 years of extensive experience in the design, build, and deployment of PySpark-based applications.
- BE/B.Tech/B.Sc. in Computer Science or Statistics from an accredited college or university (preferred).
- Prior experience with ETL tools, preferably Informatica PowerCenter, is advantageous.

Essential Professional Skills
- Problem Solving: Ability to solve complex data-driven scenarios and triage defects and production issues.
- Adaptability & Learning: Able to adapt and learn quickly, comfortable tackling new challenges and new ways of working, and ready to move from traditional methods to agile ones.
- Communication & Collaboration: Excellent communication skills, strong collaboration and coordination across various teams, and comfortable challenging peers and leadership.
- Customer Centricity: Strong customer centricity, target orientation, and solution orientation.
- Initiative: Able to jump into an ambiguous situation and take the lead on resolution.
- Proactiveness: Able to establish credibility quickly and decisively.
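The batch-aggregation shape at the heart of the PySpark work above — roughly `df.groupBy("dept").agg(sum("charge"))` — can be sketched without a Spark cluster using stdlib Python. The column names and values below are invented for illustration:

```python
from itertools import groupby
from operator import itemgetter

rows = [
    {"dept": "cardiology", "charge": 120.0},
    {"dept": "cardiology", "charge": 80.0},
    {"dept": "oncology",   "charge": 200.0},
]

# groupBy("dept").agg(sum("charge")), expressed with stdlib tools.
rows.sort(key=itemgetter("dept"))          # itertools.groupby needs sorted input
totals = {
    dept: sum(r["charge"] for r in grp)
    for dept, grp in groupby(rows, key=itemgetter("dept"))
}
print(totals)  # {'cardiology': 200.0, 'oncology': 200.0}
```

The difference in Spark is that the sort/shuffle and the partial sums are distributed across executors; the logical shape of the transformation is the same.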
Posted 2 weeks ago
6.0 - 11.0 years
4 - 7 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Description
We are seeking a skilled Azure Data Engineer with 6-11 years of experience to join our dynamic team. The ideal candidate will be responsible for designing, building, and maintaining data solutions on the Azure platform. You will work closely with cross-functional teams to ensure the efficient processing and management of data, while also driving data-driven decision-making within the organization. If you are passionate about data and have a strong technical background in Azure services, we would love to hear from you.

Responsibilities
- Design and implement data solutions using Azure services such as Azure Data Factory, Azure Databricks, and Azure SQL Database.
- Develop and maintain data pipelines for data ingestion, transformation, and storage.
- Ensure data quality and integrity by implementing data validation and cleansing processes.
- Collaborate with data scientists and analysts to understand data requirements and provide data access.
- Optimize performance of data processing and storage solutions in Azure.
- Monitor and troubleshoot data workflows and pipelines to ensure reliability and efficiency.
- Implement security and compliance measures for data handling and storage.
- Document data architecture and processes for future reference and onboarding.

Skills and Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 6-11 years of experience in data engineering or a related field.
- Strong experience with Azure data services (Azure Data Factory, Azure Databricks, Azure Synapse Analytics).
- Proficiency in SQL and experience with relational databases (e.g., Azure SQL Database, SQL Server).
- Knowledge of programming languages such as Python or Scala for data processing and ETL tasks.
- Experience with data modeling and database design principles.
- Familiarity with big data technologies (e.g., Apache Spark, Hadoop) and data warehousing concepts.
- Understanding of data governance and best practices in data security.
- Experience with CI/CD processes and DevOps practices for data solutions.
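The "data validation and cleansing" responsibility above usually means splitting incoming records into a clean set and a rejected set before loading. A minimal sketch, assuming invented field names (`id`, `amount`) — in an Azure pipeline this logic would typically live in a Databricks notebook or a Data Factory data flow:

```python
def cleanse(records):
    """Drop records missing required fields; normalise types on the rest.

    Returns (clean, rejected). Field names here are illustrative.
    """
    clean, rejected = [], []
    for rec in records:
        if rec.get("id") is None or rec.get("amount") is None:
            rejected.append(rec)      # quarantine for later inspection
            continue
        clean.append({
            "id": str(rec["id"]).strip(),     # normalise to trimmed string
            "amount": float(rec["amount"]),   # normalise to float
        })
    return clean, rejected

good, bad = cleanse([{"id": " 42 ", "amount": "9.5"}, {"id": None, "amount": 1}])
print(good)  # [{'id': '42', 'amount': 9.5}]
```

Keeping the rejects instead of silently dropping them is what makes the pipeline auditable.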
Posted 2 weeks ago
3.0 - 6.0 years
2 - 5 Lacs
Chennai, Tamil Nadu, India
On-site
The role involves developing scalable microservices with Spring Boot and Java 8, integrating NoSQL databases, and implementing messaging systems like RabbitMQ or Kafka. You'll enhance observability with Grafana and Prometheus, optimize performance using Hazelcast and Eureka, and utilize Docker and Kubernetes for deployment. Additionally, you'll focus on distributed systems best practices and fault tolerance.

HOW YOU WILL CONTRIBUTE AND WHAT YOU WILL LEARN
- Develop robust and scalable microservices using Spring Boot and advanced Java 8 features.
- Integrate and maintain MongoDB or other NoSQL databases for efficient data management.
- Implement messaging systems (RabbitMQ, Kafka, VerneMQ) to enable seamless communication between services.
- Contribute to improving system observability by configuring Grafana and Prometheus for real-time monitoring.
- Enhance application performance and fault tolerance by utilizing Hazelcast and Eureka.
- Troubleshoot JVM performance issues and optimize resource utilization.
- Leverage Docker and Kubernetes for containerization, deployment, and orchestration of applications.
- Learn best practices for distributed systems, fault tolerance, and resilience at scale.

KEY SKILLS AND EXPERIENCE
You have:
- A graduate or postgraduate degree in an engineering stream, with 3+ years of relevant experience in Java 8.
- Expertise in building enterprise-grade applications using Spring Boot and applying advanced Java 8 features such as streams, lambdas, and the java.time API.
- Experience with MongoDB or similar NoSQL databases for handling large-scale, unstructured data.
- Practical experience integrating and working with messaging systems to enable asynchronous communication and scalability.

It would be nice if you also had:
- Knowledge of setting up and configuring monitoring tools.
- Experience troubleshooting and optimizing JVM performance, including garbage collection tuning and memory management.
- Knowledge of Docker for containerization and Kubernetes for orchestration, ensuring efficient deployment and scaling.
Posted 2 weeks ago
5.0 - 10.0 years
25 - 35 Lacs
Hyderabad, Bengaluru
Work from Office
**URGENT hiring**
Note: This is a work-from-office opportunity. Apply only if you are okay with it.
Must-have skills: MySQL, PostgreSQL, NoSQL, and Redshift
Location: Bangalore/Hyderabad
Years of experience: 5+ years
Notice period: Immediate to 15 days (immediate joiners only)

Role Overview: The Database Engineer (DBE) is responsible for the design, implementation, maintenance, and optimization of databases to ensure high availability, security, and performance. The role involves working with relational and NoSQL databases, managing backups, monitoring performance, and ensuring data integrity.

Key Responsibilities:
Database Administration & Maintenance
• Install, configure, and maintain database management systems (DBMS) such as MySQL, PostgreSQL, SQL Server, Oracle, or MongoDB.
• Ensure database security, backup, and disaster recovery strategies are in place.
• Monitor database performance and optimize queries, indexing, and storage.
• Apply patches, updates, and upgrades to ensure system stability and security.
Database Design & Development
• Design and implement database schemas, tables, and relationships based on business requirements.
• Develop and optimize stored procedures, functions, and triggers.
• Implement data partitioning, replication, and sharding strategies for scalability.
Performance Tuning & Optimization
• Analyze slow queries and optimize database performance using indexing, caching, and tuning techniques.
• Conduct database capacity planning and resource allocation.
• Monitor and troubleshoot database-related issues, ensuring minimal downtime.
Security & Compliance
• Implement role-based access control (RBAC) and manage user permissions.
• Ensure databases comply with security policies, including encryption, auditing, and GDPR/HIPAA regulations.
• Conduct regular security assessments and vulnerability scans.
Collaboration & Automation
• Work closely with developers, system administrators, and DevOps teams to integrate databases with applications.
• Automate database management tasks using scripts and tools.
• Document database configurations, processes, and best practices.

Required Skills & Qualifications:
• Experience: 4+ years of experience in database administration, engineering, or related fields.
• Education: Bachelor's or Master's degree in Computer Science, Information Technology, or related disciplines.
• Technical Skills:
• Strong knowledge of SQL and database optimization techniques.
• Hands-on experience with at least one major RDBMS (MySQL, PostgreSQL, SQL Server, Oracle).
• Experience with NoSQL databases (MongoDB, Cassandra, DynamoDB) is a plus.
• Proficiency in database backup, recovery, and high-availability solutions (replication, clustering, mirroring).
• Familiarity with scripting languages (Python, Bash, PowerShell) for automation.
• Experience with cloud-based database solutions (AWS RDS, Azure SQL, Google Cloud Spanner).

Preferred Qualifications:
• Experience with database migration and cloud transformation projects.
• Knowledge of CI/CD pipelines and DevOps methodologies for database management.
• Familiarity with big data technologies like Hadoop, Spark, or Elasticsearch.
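The "optimize queries, indexing" responsibility above can be demonstrated end-to-end with Python's stdlib SQLite bindings. Table and column names are invented; production engines such as MySQL or PostgreSQL expose the same idea through their own EXPLAIN output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.executemany("INSERT INTO orders (customer) VALUES (?)",
                 [(f"c{i % 100}",) for i in range(1000)])

def plan(sql):
    # The last column of an EXPLAIN QUERY PLAN row describes the access path.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT * FROM orders WHERE customer = 'c7'"
plan_before = plan(query)   # full table scan: every row is examined
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
plan_after = plan(query)    # index search: only matching rows are touched
print(plan_before, "->", plan_after)
```

Reading the plan before and after adding an index is the basic loop of slow-query tuning: the optimizer switches from a scan over all rows to a search on `idx_customer`.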
Posted 2 weeks ago
3.0 - 5.0 years
8 - 12 Lacs
Noida
Work from Office
About the Role:
Grade Level (for internal use): 09
The Role: Platform Engineer

Department overview: PVR DevOps is a global team that provides specialized technical builds across a suite of products. DevOps members work closely with the Development, Testing, and Client Services teams to build and develop applications using the latest technologies, ensuring the highest availability and resilience of all services. Our work helps ensure that PVR continues to provide high-quality service and maintain client satisfaction.

Position Summary: S&P Global is seeking a highly motivated engineer to join our PVR DevOps team in Noida. DevOps is a rapidly growing team at the heart of ensuring the availability and correct operation of our valuations, market, and trade data applications. The team prides itself on its flexibility and technical diversity to maintain service availability and contribute improvements through design and development.

Duties & accountabilities: The role is primarily focused on building functional systems that improve our customer experience. Responsibilities include:
- Creating infrastructure and environments to support our platforms and applications, using Terraform and related technologies to ensure all our environments are controlled and consistent.
- Implementing DevOps technologies and processes, e.g. containerisation, CI/CD, infrastructure as code, metrics, monitoring, etc.
- Automating wherever possible.
- Supporting, monitoring, maintaining, and improving our infrastructure and the live running of our applications.
- Maintaining the health of cloud accounts for security, cost, and best practices.
- Providing assistance to other functional areas such as development, test, and client services.

Knowledge, Skills & Experience
- At least 3 to 5 years of experience in Linux/Unix administration in IaaS/PaaS/SaaS models.
- Deployment, maintenance, and support of enterprise applications in AWS, including (but not limited to) Route 53, ELB, VPC, EC2, S3, ECS, and SQS.
- Good understanding of Terraform and similar infrastructure-as-code technologies.
- Strong experience with SQL and NoSQL databases such as MySQL, PostgreSQL, DB2, MongoDB, and DynamoDB.
- Experience with automation/configuration management using toolsets such as Chef, Puppet, or equivalent.
- Experience with enterprise systems deployed as microservices through code pipelines utilizing containerization (Docker).
- Working knowledge of, and the ability to write, scripts in languages including Bash and Python, and an ability to understand Java, JavaScript, and PHP.

Personal competencies
Personal Impact:
- Confident individual able to represent the team at various levels.
- Strong analytical and problem-solving skills.
- Demonstrated ability to work independently with minimal supervision.
- Highly organised with very good attention to detail.
- Takes ownership of issues and drives them through to resolution.
- Flexible and willing to adapt to changing situations in a fast-moving environment.
Communication:
- Demonstrates a global mindset, respects cultural differences, and is open to new ideas and approaches.
- Able to build relationships with all teams, identifying and focusing on their needs.
- Ability to communicate effectively at both business and technical levels is essential.
- Experience working in a global team.
Teamwork:
- An effective team player and strong collaborator across technology and all relevant areas of the business.
- Enthusiastic, with a drive to succeed.
- Thrives in a pressured environment with a can-do attitude.
- Must be able to work on own initiative.
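The infrastructure-as-code work described above (Terraform and similar) reduces to reconciling a declared desired state against what actually exists. A toy sketch of that diff step — resource names and attributes are invented, and real Terraform of course handles dependencies, providers, and state locking on top of this:

```python
def diff(desired, actual):
    """Compute create/update/delete sets: the core of an IaC reconcile step."""
    create = {k: v for k, v in desired.items() if k not in actual}
    delete = sorted(k for k in actual if k not in desired)
    update = {k: v for k, v in desired.items()
              if k in actual and actual[k] != v}
    return create, update, delete

desired = {"web": {"size": "t3.small"}, "db": {"size": "t3.medium"}}
actual  = {"web": {"size": "t3.micro"}, "cache": {"size": "t3.micro"}}
print(diff(desired, actual))
# create db, resize web, destroy cache
```

This is essentially what `terraform plan` prints before `apply` makes any change, which is why plan output is reviewable in CI.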
Posted 2 weeks ago
8.0 - 13.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Req ID: 327855
We are currently seeking a Python Django Microservices Lead to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties / Responsibilities:
- Lead the development of backend systems using Django.
- Design and implement scalable and secure APIs.
- Integrate Azure Cloud services for application deployment and management.
- Utilize Azure Databricks for big data processing and analytics.
- Implement data processing pipelines using PySpark.
- Collaborate with front-end developers, product managers, and other stakeholders to deliver comprehensive solutions.
- Conduct code reviews and ensure adherence to best practices.
- Mentor and guide junior developers.
- Optimize database performance and manage data storage solutions.
- Ensure high performance and security standards for applications.
- Participate in architecture design and technical decision-making.

Minimum Skills Required / Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 8+ years of experience in backend development.
- 8+ years of experience with Django.
- Proven experience with Azure Cloud services.
- Experience with Azure Databricks and PySpark.
- Strong understanding of RESTful APIs and web services.
- Excellent communication and problem-solving skills.
- Familiarity with Agile methodologies.
- Experience with database management (SQL and NoSQL).

Skills: Django, Python, Azure Cloud, Azure Databricks, Delta Lake and Delta tables, PySpark, SQL/NoSQL databases, RESTful APIs, Git, and Agile methodologies
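"Design and implement scalable and secure APIs" in the Django duties above starts with validating inbound payloads. A framework-independent sketch — the endpoint fields (`name`, `qty`) are hypothetical, and in a real Django project a Django REST Framework serializer would normally own this logic:

```python
def validate_payload(payload):
    """Validate a JSON payload for a hypothetical POST endpoint.

    Returns (ok, errors) so the caller can respond 400 with field-level
    messages, mirroring what a DRF serializer's .is_valid()/.errors does.
    """
    errors = {}
    name = payload.get("name")
    if not isinstance(name, str) or not name.strip():
        errors["name"] = "required, non-empty string"
    qty = payload.get("qty")
    if not isinstance(qty, int) or qty < 1:
        errors["qty"] = "required, positive integer"
    return (len(errors) == 0, errors)

print(validate_payload({"name": "widget", "qty": 3}))  # (True, {})
```

Returning structured per-field errors, rather than a single message, is what keeps API clients debuggable as the schema grows.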
Posted 2 weeks ago
5.0 - 8.0 years
22 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in office required)
Notice period: Immediate to 15 days (immediate joiners only)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. Candidates with experience only in PySpark and not in Python will not be considered.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)

Role Overview: We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in
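The real-time pipelines this posting describes typically hinge on windowed aggregation — the shape a Kafka Streams topology expresses with `groupByKey().windowedBy(...).count()`. A stdlib sketch of that logic, with a plain list of (timestamp, event_type) pairs standing in for the consumed topic and invented healthcare event names:

```python
from collections import defaultdict

def window_counts(events, window_secs=60):
    """Count events per (window_start, event_type).

    Tumbling-window aggregation: each event lands in exactly one
    fixed-size window, keyed by the window's start timestamp.
    """
    counts = defaultdict(int)
    for ts, event_type in events:
        window_start = ts - (ts % window_secs)   # floor to window boundary
        counts[(window_start, event_type)] += 1
    return dict(counts)

events = [(5, "admit"), (30, "admit"), (65, "discharge"), (70, "admit")]
print(window_counts(events))
# {(0, 'admit'): 2, (60, 'discharge'): 1, (60, 'admit'): 1}
```

What Kafka Streams adds on top is incremental state, repartitioning by key, and handling of late-arriving events; the per-window arithmetic is the same.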
Posted 2 weeks ago
12.0 - 15.0 years
22 - 25 Lacs
Hyderabad, Chennai
Work from Office
Full-stack developer with expertise in Java, Spring Boot, React JS, Kafka, NoSQL (Cosmos, Cassandra), Azure Cloud, AKS, and Azure SQL. Domain focus: Retail/CPG, Logistics, and Supply Chain. Mail: kowsalya.k@srsinfoway.com
Posted 2 weeks ago
4.0 - 8.0 years
4 - 8 Lacs
Pune, Maharashtra, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 4+ years of experience in data modelling and data architecture.
- Proficiency in data modelling tools (Erwin, IBM InfoSphere Data Architect) and database management systems.
- Familiarity with different data models: relational, dimensional, and NoSQL databases.
- Understanding of business processes and how data supports business decision-making.
- Strong understanding of database design principles, data warehousing concepts, and data governance practices.

Preferred technical and professional experience
- Excellent analytical and problem-solving skills with keen attention to detail.
- Ability to work collaboratively in a team environment and manage multiple projects simultaneously.
- Knowledge of programming languages such as SQL.
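The dimensional modelling this posting asks for centres on star schemas: facts joined to descriptive dimensions. A tiny runnable sketch using Python's stdlib SQLite bindings — the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- dimension: descriptive attributes; fact: measurable events
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, qty INTEGER);
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales  VALUES (1, 3), (1, 2), (2, 5);
""")
rows = conn.execute("""
    SELECT p.name, SUM(f.qty)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)  # [('gadget', 5), ('widget', 5)]
```

The same fact table can be joined to any number of dimensions (date, customer, region), which is what makes the dimensional model convenient for ad-hoc analytical slicing.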
Posted 2 weeks ago
5.0 - 10.0 years
10 - 12 Lacs
Nagpur, Maharashtra, India
On-site
Job Description
Project Role: Database Administrator
Project Role Description: Administer, develop, test, or demonstrate databases. Perform many related database functions across one or more teams or clients, including designing, implementing, and maintaining new databases, backup/recovery, and configuration management. Install database management systems (DBMS) and provide input for modification of procedures and documentation used for problem resolution and day-to-day maintenance.
Must-have skills: IFS Solutions Administration
Good-to-have skills: Oracle Applications DBA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Job Summary: As a Database Administrator, you will be responsible for administering, developing, testing, and demonstrating databases. You will handle a variety of database functions across multiple teams or clients, including designing, implementing, and maintaining new databases, managing backups and recovery, and overseeing configuration management. You will also install Database Management Systems (DBMS) and contribute to the modification of procedures and documentation used for problem resolution and day-to-day maintenance.

Roles & Responsibilities:
- Act as a Subject Matter Expert (SME) and collaborate with the team to ensure high performance.
- Take responsibility for team decisions and contribute to key decisions across multiple teams.
- Provide solutions to problems within your team and across other teams.
- Develop and implement database security policies and procedures.
- Optimize database performance using tuning and indexing strategies.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in IFS Solutions Administration.
- Good-to-Have Skills: Experience with Oracle Applications DBA.
- Strong understanding of database management systems.
- Knowledge of database design and implementation best practices.
- Experience with database backup and recovery procedures.

Additional Information:
- A minimum of 7.5 years of experience in IFS Solutions Administration is required.
- This position is based in Nagpur.
- 15 years of full-time education is required.
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Req ID: 327863
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer Senior Consultant to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Duties: Lead the development of backend systems using Django. Design and implement scalable and secure APIs. Integrate Azure Cloud services for application deployment and management. Utilize Azure Databricks for big data processing and analytics. Implement data processing pipelines using PySpark. Collaborate with front-end developers, product managers, and other stakeholders to deliver comprehensive solutions. Conduct code reviews and ensure adherence to best practices. Mentor and guide junior developers. Optimize database performance and manage data storage solutions. Ensure high performance and security standards for applications. Participate in architecture design and technical decision-making.
Minimum Skills Required: Bachelor's degree in Computer Science, Information Technology, or a related field. 8+ years of experience in backend development. 8+ years of experience with Django. Proven experience with Azure Cloud services. Experience with Azure Databricks and PySpark. Strong understanding of RESTful APIs and web services. Excellent communication and problem-solving skills. Familiarity with Agile methodologies. Experience with database management (SQL and NoSQL).
Skills: Django, Python, Azure Cloud, Azure Databricks, Delta Lake and Delta tables, PySpark, SQL/NoSQL databases, RESTful APIs, Git, and Agile methodologies
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success.
As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at . NTT DATA endeavors to make our website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click .
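The PySpark pipelines this posting centers on are, at their core, grouped aggregations over event data. A minimal sketch of that aggregation step in plain Python for illustration (a real Databricks job would express this with DataFrame operations such as `groupBy("user").agg(sum("amount"))`; the field names here are hypothetical):

```python
from collections import defaultdict

def aggregate_events(events):
    """Sum amounts per user -- the same shape as a PySpark
    groupBy/agg step, written in plain Python for clarity."""
    totals = defaultdict(float)
    for event in events:
        totals[event["user"]] += event["amount"]
    return dict(totals)

# Illustrative input records, standing in for rows read from a Delta table.
events = [
    {"user": "a", "amount": 10.0},
    {"user": "b", "amount": 5.0},
    {"user": "a", "amount": 2.5},
]
```

The distributed version differs only in where the work runs: Spark shuffles records by key across executors before combining the partial sums.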
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Req ID: 327859
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer Advisor to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Duties: Lead the development of backend systems using Django. Design and implement scalable and secure APIs. Integrate Azure Cloud services for application deployment and management. Utilize Azure Databricks for big data processing and analytics. Implement data processing pipelines using PySpark. Collaborate with front-end developers, product managers, and other stakeholders to deliver comprehensive solutions. Conduct code reviews and ensure adherence to best practices. Mentor and guide junior developers. Optimize database performance and manage data storage solutions. Ensure high performance and security standards for applications. Participate in architecture design and technical decision-making.
Minimum Skills Required: Bachelor's degree in Computer Science, Information Technology, or a related field. 8+ years of experience in backend development. 8+ years of experience with Django. Proven experience with Azure Cloud services. Experience with Azure Databricks and PySpark. Strong understanding of RESTful APIs and web services. Excellent communication and problem-solving skills. Familiarity with Agile methodologies. Experience with database management (SQL and NoSQL).
Skills: Django, Python, Azure Cloud, Azure Databricks, Delta Lake and Delta tables, PySpark, SQL/NoSQL databases, RESTful APIs, Git, and Agile methodologies
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success.
Posted 2 weeks ago
4.0 - 9.0 years
2 - 6 Lacs
Hyderabad
Work from Office
Senior Engineer - Cloud Services and Software.
1. 4+ years of strong experience in Microsoft .NET, along with AWS cloud experience.
2. At least 1+ year in .NET Core.
3. Strong experience in SQL (MySQL, SQL Server) and NoSQL databases.
4. Good to have: knowledge of design patterns, SOLID principles, and Clean Architecture.
5. Good to have: experience migrating applications from .NET Framework to .NET Core.
6. Good to have: experience with CI/CD pipelines and Git clients such as SourceTree.
7. Good to have: experience with JIRA/Agile.
8. Should be ready to learn new technologies.
9. Strong analytical and problem-solving skills.
10. Good communication and client-handling skills.
Posted 2 weeks ago
4.0 - 8.0 years
4 - 9 Lacs
Bengaluru
Work from Office
As a key member of our dynamic team, you will play a vital role in crafting exceptional software experiences. Your responsibilities will encompass the design and implementation of innovative features, fine-tuning and sustaining existing code for optimal performance, and guaranteeing top-notch quality through rigorous testing and debugging. Collaboration is at the heart of what we do, and you’ll be working closely with fellow developers, designers, and product managers to ensure our software aligns seamlessly with user expectations.
Required education: Bachelor's Degree
Required technical and professional expertise:
Software Development Expertise: Strong background in software development, demonstrating expertise in programming languages such as Java, Python, or C++.
Cloud Technology Proficiency: Experience with cloud-based technologies, showcasing familiarity with modern cloud ecosystems and tools.
NoSQL Database Knowledge: Proficiency in NoSQL databases, particularly experience with technologies like Cloudant, adding value to data management practices.
Self-Starter Mindset: A self-starter with a proactive mindset, able to initiate and drive projects independently.
Excellent Problem-Solving Skills: Demonstrated excellence in problem-solving, with the ability to tackle complex issues and find effective solutions.
Collaborative Team Player: Ability to work seamlessly as part of a team, contributing to collective goals and fostering a collaborative work environment.
Skill set required: Backend - Node.js, Python; Kubernetes (operator-level skills are a plus); DevOps pipelines; broad-level compliance understanding (good to have)
Posted 2 weeks ago
4.0 - 9.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Roles and Responsibilities: 4+ years of experience as a data developer using Python. Knowledge of Spark and PySpark preferable but not mandatory. Azure Cloud experience preferred (alternate cloud experience is fine), ideally on the Azure platform including Azure Data Lake, Databricks, and Data Factory. Working knowledge of different file formats such as JSON, Parquet, and CSV. Familiarity with data encryption and data masking. Database experience in SQL Server preferable; experience with NoSQL databases like MongoDB preferred. Team player; reliable, self-motivated, and self-disciplined.
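Working across file formats like the ones this role lists often begins with normalizing CSV into newline-delimited JSON before landing it in a lake. A stdlib-only sketch of that step (the sample data is illustrative; Parquet would need a library such as PyArrow):

```python
import csv
import io
import json

def csv_to_json_lines(csv_text):
    """Convert CSV text into newline-delimited JSON records, a common
    normalization step before writing files to a data lake."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return "\n".join(json.dumps(row, sort_keys=True) for row in reader)

sample = "id,name\n1,alice\n2,bob\n"
```

Note that `csv.DictReader` yields every field as a string; applying a typed schema (as Spark or Data Factory would) is a separate step.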
Posted 2 weeks ago
5.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Design, develop, and maintain scalable and efficient Python applications using frameworks like FastAPI or Flask. Develop, test, and deploy RESTful APIs to interact with front-end services. Integrate and establish connections to various relational and non-relational databases (e.g., MySQL, PostgreSQL, MongoDB), typically through an ORM such as SQLAlchemy. Solid understanding of relational and NoSQL databases and the ability to establish and manage connections from Python applications. Write clean, maintainable, and efficient code, following coding standards and best practices. Leverage AWS cloud services for deploying and managing applications (e.g., EC2, Lambda, RDS, S3, etc.). Troubleshoot and resolve software defects, performance issues, and scalability challenges.
Posted 2 weeks ago
2.0 - 9.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Design, develop, and maintain scalable and efficient Python applications using frameworks like FastAPI or Flask. Develop, test, and deploy RESTful APIs to interact with front-end services. Integrate and establish connections to various relational and non-relational databases (e.g., MySQL, PostgreSQL, MongoDB), typically through an ORM such as SQLAlchemy. Solid understanding of relational and NoSQL databases and the ability to establish and manage connections from Python applications. Write clean, maintainable, and efficient code, following coding standards and best practices. Leverage AWS cloud services for deploying and managing applications (e.g., EC2, Lambda, RDS, S3, etc.). Troubleshoot and resolve software defects, performance issues, and scalability challenges.
Posted 2 weeks ago
8.0 - 13.0 years
9 - 14 Lacs
Bengaluru
Work from Office
8+ years of combined experience across backend and data platform engineering roles. Worked on large-scale distributed systems. 5+ years of experience building data platforms with Apache Spark, Flink, or similar frameworks. 7+ years of programming experience with Java. Experience building large-scale data/event pipelines. Experience with relational SQL and NoSQL databases, including Postgres/MySQL, Cassandra, and MongoDB. Demonstrated experience with EKS, EMR, S3, IAM, KDA (Kinesis Data Analytics), Athena, Lambda, networking, ElastiCache, and other AWS services.
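The event pipelines mentioned above rest on windowed aggregation. A tumbling-window count can be sketched in plain Python (Flink and Spark Structured Streaming provide this natively over distributed, unbounded streams; the 60-second window size is an arbitrary choice for illustration):

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds=60):
    """Count events per tumbling window, keyed by window start time.

    events: iterable of (epoch_seconds, payload) tuples. Each event falls
    into exactly one non-overlapping window -- the defining property of
    tumbling (as opposed to sliding) windows.
    """
    counts = Counter()
    for timestamp, _payload in events:
        window_start = (timestamp // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

# Illustrative event stream: timestamps in seconds with dummy payloads.
events = [(0, "a"), (10, "b"), (59, "c"), (60, "d"), (125, "e")]
```

What the stream frameworks add on top of this logic is handling out-of-order arrival (watermarks) and distributing state across workers.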
Posted 2 weeks ago
4.0 - 9.0 years
12 - 22 Lacs
Gurugram
Work from Office
To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9
As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Role & responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Preferred candidate profile:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration
Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
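The S3 data lakes this role builds conventionally use Hive-style partitioned key layouts, which services like Athena and Glue rely on for partition pruning. A small sketch of that convention (the bucket and table names are hypothetical):

```python
from datetime import date

def partition_key(bucket, table, day, filename):
    """Build a Hive-style partitioned S3 key (year=/month=/day=),
    the layout Athena and Glue crawlers use to prune partitions."""
    return (
        f"s3://{bucket}/{table}/"
        f"year={day.year}/month={day.month:02d}/day={day.day:02d}/{filename}"
    )

key = partition_key("analytics-lake", "orders", date(2024, 6, 5), "part-0000.parquet")
```

With this layout, a query filtered on `year`, `month`, and `day` scans only the matching prefixes rather than the whole table, which is usually the single biggest Athena cost lever.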
Posted 2 weeks ago
4.0 - 9.0 years
12 - 22 Lacs
Gurugram, Bengaluru
Work from Office
To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9
As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Role & responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Preferred candidate profile:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration
Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
Posted 2 weeks ago
3.0 - 7.0 years
12 - 22 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Experience: 3-7 years. Responsibilities: Design, develop, test, and maintain software solutions using .NET, C#, and Azure Cloud. Collaborate with cross-functional teams on RESTful services, ASP.NET, microservices, Azure Cloud, Azure Functions, and Storage.
Posted 2 weeks ago
9.0 - 13.0 years
20 - 27 Lacs
Bengaluru
Remote
5+ years of experience in Java development. Strong experience with the Spring/Spring Boot framework. Proficiency in developing and consuming RESTful APIs. Experience with cloud platforms such as AWS or Azure (cloud-native service utilization is a must).
Posted 2 weeks ago