0.0 - 2.0 years
0 Lacs
Yerawada, Pune, Maharashtra
On-site
Job Title: Python Developer
Job Type: Hybrid
Experience: Minimum 2 to 4 years
Salary: 4 to 8 LPA (Negotiable)
No. of Vacancies: 01
Company Name: Whiz IT Services Pvt. Ltd.
Location: 91Springboard, Creaticity Mall, Shastrinagar, Yerawada, Pune, Maharashtra 411006

Job Overview: We are hiring a skilled and experienced AWS Python Developer to join our client's team. The ideal candidate will have strong expertise in Python development and deep, hands-on experience with a range of AWS services.

Responsibilities and Required Skills:
● Design, develop, and deploy scalable applications using Python on AWS
● Integrate and manage AWS services such as Lambda, EC2, S3, DynamoDB, and RDS
● Collaborate with cross-functional teams to define, design, and deliver new features
● Troubleshoot, debug, and optimize code for performance and scalability
● Ensure best practices in security, compliance, and code quality
● Strong experience in Python programming
● Extensive hands-on experience with AWS services
● Experience with cloud architecture and infrastructure as code (e.g., CloudFormation, Terraform) is a plus
● Familiarity with CI/CD tools and version control (e.g., Git)
● Must pass a HackerRank coding test
● Minimum 2–4 years of relevant experience
● Available to join immediately

Job Types: Full-time, Permanent
Pay: From ₹50,000.00 per month
Location Type: In-person
Schedule: Fixed shift, Monday to Friday
Experience: Python: 2 years (Required); AWS: 2 years (Required); Total: 2 years (Required)
Work Location: In person
Speak with the employer: +91 9080136167
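As a hedged illustration of the kind of work this role describes (Python services deployed on AWS Lambda), here is a minimal handler sketch. The function name, event fields, and response shape are assumptions for illustration, not taken from the posting; a Lambda handler is just a Python function, so it can be exercised locally without AWS.

```python
import json

def handler(event, context=None):
    # Minimal AWS-Lambda-style handler: parse an API-Gateway-shaped event,
    # validate the payload, and return a JSON response with a status code.
    body = event.get("body") or "{}"
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}
    name = payload.get("name")
    if not name:
        return {"statusCode": 400, "body": json.dumps({"error": "missing 'name'"})}
    return {"statusCode": 200, "body": json.dumps({"greeting": f"Hello, {name}"})}
```

Because the handler is a plain function, it can be unit-tested directly by passing event dictionaries, which is also how interview exercises on platforms like HackerRank typically probe this skill.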
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Basic Qualifications:
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture experience (design patterns, reliability, and scaling) on new and existing systems
- Experience programming with at least one software programming language

Want to be part of a start-up environment within Amazon, designing and building a new Fintech Payments product from scratch? Want to enable hundreds of millions of Amazon customers to shop on Amazon using next-generation credit products? Want to deliver products that handle highly sensitive customer data, at high traffic and minimum latency, including cross-region calls where required? Want to learn the latest technologies and skills relevant in the software development industry?

The Amazon India Emerging Payments team is looking for software developers who are passionate about designing and building the next-generation Payments product from the ground up. Once built, this highly reliable and scalable product will provide a new payment gateway to hundreds of millions of Amazon India customers. The team will learn and use the latest AWS technologies, including AWS Dacia, AWS Kinesis, Lambda, SNS, SQS, server-side encryption on DynamoDB using client-managed keys, API Gateway, AWS VPC, AWS NLB, CloudTrail, Elasticsearch, etc. The team also provides opportunities to learn and work on machine learning, and to interact with and influence Amazon's third-party partners such as banks and NBFCs. The platform will be designed to support other emerging economies with similar requirements, and the role offers a huge opportunity for developers to build a strong portfolio of patents for Amazon. Developers on the team need a strong understanding of computer fundamentals and, preferably, experience building large-scale distributed systems. Experience with web-based applications and/or web-services-based applications, especially at massive scale, would also be helpful.

Preferred Qualifications:
- 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Job Title: Full Stack Developer

About the Job: We are an innovative global healthcare company with one purpose: to chase the miracles of science to improve people's lives. We're also a company where you can flourish and grow your career, with countless opportunities to explore, make connections with people, and stretch the limits of what you thought was possible. Ready to get started?

Our Team: The Web Application Platform team enables the "Factories of the Future" by developing best-in-class web apps and APIs that support the integration between the data platform and web applications. We are a team dedicated to fostering knowledge and expertise in well-crafted and sustainable software development.

Main Responsibilities:
- Develop and maintain backend services with Node.js (Nest.js framework) and Vue.js frontend apps
- Work with AWS cloud services and Terraform/Terragrunt to deploy infrastructure
- Suggest new, innovative patterns to improve the software development process
- Write clean, reusable, and scalable code and tests using TypeScript/JavaScript
- Perform code reviews and ensure code quality and adherence to coding and architectural standards
- Troubleshoot and debug software issues
- Continuously learn and keep up to date with the latest technologies and industry trends

About You: We are seeking a Full Stack Developer with a strong background in TypeScript and JavaScript, capable of developing and maintaining both backend (Node.js with the Nest.js framework) and frontend (Vue.js or other modern SPA frameworks) components. The ideal candidate should have expertise in database management, API design (REST and GraphQL), and deployment using AWS cloud services and infrastructure-as-code tools like Terraform/Terragrunt.

Soft skills: The ability to analyze problems, think critically, and develop creative solutions is crucial, along with good communication skills and the ability to work in a team environment. Ideal candidates possess a growth mindset, embracing change and demonstrating a willingness to learn and adapt.

Technical skills: Should be adept at writing clean, efficient, and maintainable code using TypeScript and JavaScript for both backend and frontend components. A solid understanding of core development concepts is crucial: working with databases, designing and implementing APIs (including REST and GraphQL), and handling server-side logic. The ideal candidate has practical experience with AWS services such as EC2, S3, Lambda, and DynamoDB for deploying and managing cloud-based applications. Proficiency in infrastructure-as-code tools is also important for provisioning and managing infrastructure resources in an automated and scalable manner.

Experience: 4 to 7 years
Education: While not mandatory, a relevant educational background or certifications related to software development would be a plus.
Languages: Fluency in written and spoken English

About You: Expertise in Learning Management System administration, especially CSOD. Strong project management skills and strong customer orientation. Analytical and problem-solving skills. Rigorous and quality-oriented. Multi-tasking and the ability to self-organize and prioritize in a fast-paced environment. A recognized team player. Ability to manage multiple stakeholders in a global network and to be managed remotely. Solid decision-making skills. Facilitation and training/coaching capabilities. Good communication (verbal/written) skills and a good command of English. Bachelor's degree in Human Resources, Business Administration, Information Technology, or other related fields. 5+ years of experience in learning processes and system administration. Shared-services-center and vendor-management experience.

Why choose us? Bring the miracles of science to life alongside a supportive, future-focused team. Discover endless opportunities to grow your talent and drive your career, whether through a promotion or a lateral move, at home or internationally. Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact. Take good care of yourself and your family with a wide range of health and wellbeing benefits, including high-quality healthcare, prevention and wellness programs, and at least 14 weeks' gender-neutral parental leave. Opportunity to work in an international environment, collaborating with diverse business teams and vendors, working in a dynamic team, and fully empowered to propose and implement innovative ideas.

Pursue Progress. Discover Extraordinary. Progress doesn't happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas, and exploring all the opportunities we have to offer. Let's pursue progress. And let's discover extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities:
- Develop, maintain, and optimize server-side applications using Python and Django.
- Design and implement RESTful APIs to support front-end functionality.
- Work with cloud platforms, specifically AWS, to manage and deploy applications.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Optimize applications for maximum speed and scalability.
- Develop and maintain databases and data storage solutions.
- Troubleshoot and debug applications to ensure high quality and performance standards.
- Implement security and data protection measures.
- Participate in code reviews and contribute to continuous improvement initiatives.
- Handle both synchronous and asynchronous programming tasks to improve application performance.

Skills and Experience:
- 5 to 7 years of experience in backend development using Python and Django.
- Hands-on experience with AWS services, including EC2, S3, RDS, Lambda, and more.
- Strong understanding of web technologies such as HTTP, REST, and JSON.
- Experience with relational databases like PostgreSQL or MySQL and familiarity with ORM (Object-Relational Mapping).
- Proficiency in designing and developing RESTful APIs.
- Familiarity with version control systems like Git.
- Experience with Continuous Integration/Continuous Deployment (CI/CD) tools and pipelines.
- Knowledge of best practices for software development, including code reviews, testing, and documentation.
- Strong problem-solving skills and the ability to work independently and in a team environment.
- Good knowledge of Celery for managing asynchronous tasks and background jobs.
- Experience with Redis for caching and message brokering.
- Understanding of synchronous and asynchronous programming.

Preferred Qualifications:
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Familiarity with microservices architecture and serverless computing.
- Knowledge of other backend frameworks or languages such as Flask, FastAPI, or Django.

Good to Have:
- Understanding of front-end technologies (e.g., JavaScript, HTML, CSS) for better collaboration with front-end teams.
- Experience with Agile/Scrum.

Skills: DynamoDB, API Gateway, Secrets Manager, S3, Lambda, Shield, ECS, Amplify, CloudFront, RDS, OpenSearch (Elasticsearch).
Experience: 5+ years.
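The synchronous-versus-asynchronous distinction this posting emphasizes can be sketched with Python's standard asyncio module. A Celery/Redis setup applies the same idea of offloading work out of the request path, but this minimal, standard-library-only example (with invented task names and delays) shows why concurrency helps I/O-bound workloads:

```python
import asyncio
import time

async def fetch(name, delay):
    # Simulates an I/O-bound call (e.g., a database query or HTTP request).
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Run three I/O-bound tasks concurrently; total wall time is roughly the
    # slowest single task (~0.2s), not the sum of all delays (~0.45s).
    return await asyncio.gather(
        fetch("orders", 0.2), fetch("users", 0.15), fetch("stock", 0.1)
    )

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
```

In a Django/Celery deployment the same offloading is done by enqueueing tasks to a broker such as Redis and letting worker processes execute them in the background, but the performance intuition is identical.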
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout, and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, as well as work on project teams to analyze, design, develop, and deploy business intelligence / data integration solutions to support a variety of customer needs. This position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings, and initiatives through mentoring and coaching. Provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation, and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges. Works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports, and business intelligence best practices. Responsible for repeatable, lean, and maintainable enterprise BI design across organizations. Effectively partners with the client team. Leadership not only in the conventional sense: within a team, we expect people to be leaders. Candidates should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, and knowledge sharing.

Responsibilities:
- Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
- Create functional and technical documentation, e.g., ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc.
- Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow and the conceptual, logical, and physical data models based on those needs.
- Perform data analysis to validate data models and to confirm the ability to meet business needs.
- May serve as project or DI lead, overseeing multiple consultants.
- Stay current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for data integration.
- Ensure proper execution/creation of methodology, training, templates, resource plans, and engagement review processes.
- Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate.
- Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business unit level.
- Architect, design, develop, and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations, and best-practice standards. Toolsets include, but are not limited to, SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, and Qlik.
- Work with the report team to identify, design, and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability requirements.

Must have:
- Writing code in a programming language; working experience in Python, PySpark, Databricks, Scala, or similar.
- Data pipeline development and management: design, develop, and maintain ETL (Extract, Transform, Load) pipelines using AWS services like AWS Glue, AWS Data Pipeline, Lambda, and Step Functions.
- Implement incremental data processing using tools like Apache Spark (EMR), Kinesis, and Kafka.
- Work with AWS data storage solutions such as Amazon S3, Redshift, RDS, DynamoDB, and Aurora.
- Optimize data partitioning, compression, and indexing for efficient querying and cost optimization.
- Implement data lake architecture using AWS Lake Formation and Glue Catalog.
- Implement CI/CD pipelines for data workflows using CodePipeline, CodeBuild, and GitHub.

Good to have:
- Enterprise data modelling and semantic modelling; working experience in ERwin, ER/Studio, PowerDesigner, or similar.
- Logical/physical models on big data sets or a modern data warehouse; working experience in ERwin, ER/Studio, PowerDesigner, or similar.
- Agile process (Scrum cadences, roles, deliverables); basic understanding of Azure DevOps, JIRA, or similar.
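The incremental data processing called for above is commonly implemented with a high-water-mark pattern: each run loads only rows modified since the last successful load. This minimal standard-library sketch (the row shape, `updated_at` field, and watermark handling are hypothetical stand-ins for a Glue or Spark job) shows the core logic:

```python
from datetime import datetime, timezone

def incremental_load(source_rows, watermark):
    """Return rows newer than the stored watermark, plus the new watermark.

    source_rows: iterable of dicts with an 'updated_at' datetime column.
    watermark:   datetime of the last successful load (exclusive lower bound).
    """
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    # Advance the watermark to the newest row seen; keep the old one if no
    # new rows arrived, so the next run re-checks from the same point.
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},
    {"id": 3, "updated_at": datetime(2024, 1, 5, tzinfo=timezone.utc)},
]
loaded, wm = incremental_load(rows, datetime(2024, 1, 2, tzinfo=timezone.utc))
```

In production the watermark would be persisted (e.g., in a bookmark table or job state store) and the filter pushed down to the source query rather than applied in memory.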
Posted 2 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Kochi, Coimbatore, Thiruvananthapuram
Work from Office
Must-have skills: installation, configuration, and management of Linux/Windows systems
Good-to-have skills: JIRA/Confluence
Experience: 3.5 - 5 years of experience is required
Educational Qualification: Graduation

Job Summary: As an L2 Cloud Operations Engineer, you will operate an e-commerce solution built over on-prem and cloud infrastructure. You will be involved in maintaining and improving the client's business platforms, and you will be responsible for site reliability and platform stability. You will be expected to respond to incidents, support problem resolution, execute changes, and take part in projects to improve or re-engineer the platform.

Roles and Responsibilities:
- Continuous monitoring of the platform's performance and uptime
- Fast identification and resolution of incidents
- Resolution of service requests
- Managing the platform configuration to ensure it is optimized and up to date
- Improving efficiency by automating routine tasks

Professional and Technical Skills: You must have a strong technical aptitude and an organized, process-driven work ethic.
- 3.5 - 5 years of relevant experience with installation, configuration, and management of Linux/Windows systems.
- Strong working experience in managing and maintaining public clouds like AWS, Azure, or GCP.
- Strong experience in setting up and configuring monitoring tools like Prometheus, Grafana, Zabbix, etc.
- Strong experience with installation/configuration of Java application servers such as JBoss/WebLogic/Tomcat, and with analyzing application logs and GC logs to troubleshoot performance and functional issues.
- Hands-on experience with cloud provisioning tools like Terraform/CloudFormation will be an added advantage.
- Hands-on experience with Docker/Kubernetes will be an added advantage.
- Experience with ELK/Kafka/OpenShift/Python scripting will be an added advantage.
- Good knowledge of SQL and NoSQL databases like MySQL/Oracle/PostgreSQL/DynamoDB/MongoDB/Cassandra/Redis.
- Strong written and verbal communication skills and a track record of high customer satisfaction.
- Develop automation scripts as needed to enhance operational efficiency.
- Prior experience supporting Jira/Confluence or another service management tool.
- Prior experience working in an Agile environment will be an advantage.

Experience: 3.5 - 5 years of experience is required
Educational Qualification: Graduation
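As a hedged sketch of the routine-task automation and log analysis this role mentions, here is a minimal log scanner that counts error lines and flags long GC pauses. The log format, field layout, and pause threshold are invented for illustration; real JVM GC logs vary by collector and flags:

```python
import re

# Hypothetical GC-log line shape: "... GC pause (young) 350.2 ms"
GC_PAUSE = re.compile(r"GC pause.*?(\d+(?:\.\d+)?)\s*ms")

def scan_log(lines, pause_threshold_ms=200.0):
    """Count ERROR lines and collect GC pauses above a threshold (ms)."""
    errors = 0
    slow_pauses = []
    for line in lines:
        if "ERROR" in line:
            errors += 1
        m = GC_PAUSE.search(line)
        if m and float(m.group(1)) > pause_threshold_ms:
            slow_pauses.append(float(m.group(1)))
    return {"errors": errors, "slow_gc_pauses": slow_pauses}

sample = [
    "2024-05-01 10:00:01 INFO  request served in 12 ms",
    "2024-05-01 10:00:02 ERROR connection reset by peer",
    "2024-05-01 10:00:03 INFO  GC pause (young) 350.2 ms",
    "2024-05-01 10:00:04 INFO  GC pause (young) 45.1 ms",
]
report = scan_log(sample)
```

A script like this is the kind of building block that monitoring stacks (Prometheus exporters, ELK parsers) formalize at scale.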
Posted 2 weeks ago
5.0 years
0 Lacs
India
On-site
Job Profile Summary: The Cloud NoSQL Database Engineer performs database engineering and administration activities, including design, planning, configuration, monitoring, automation, self-serviceability, alerting, and space management. The role involves database backup and recovery, performance tuning, security management, and migration strategies. The ideal candidate will lead and advise on Neo4j and MongoDB database solutions, including migration, modernization, and optimization, while also supporting secondary RDBMS platforms (SQL Server, PostgreSQL, MySQL, Oracle). The candidate should be proficient in workload migrations to the cloud (AWS/Azure/GCP).

Key Responsibilities:
- Neo4j & MongoDB Administration: Install, configure, and maintain Neo4j (GraphDB) and MongoDB (NoSQL) databases in cloud and on-prem environments.
- NoSQL Data Modeling: Design and implement graph-based models in Neo4j and document-based models in MongoDB to optimize data retrieval and relationships.
- Performance Tuning & Optimization: Monitor and tune databases for query performance, indexing strategies, and replication performance.
- Backup, Restore & Disaster Recovery: Design and implement backup and recovery strategies for Neo4j, MongoDB, and secondary database platforms.
- Migration & Modernization: Lead database migration strategies, including homogeneous and heterogeneous migrations between NoSQL, Graph, and RDBMS platforms.
- Capacity Planning: Forecast database growth and plan for scalability, optimal performance, and infrastructure requirements.
- Patch Management & Upgrades: Plan and execute database software upgrades, patches, and service packs.
- Monitoring & Alerting: Set up proactive monitoring and alerting for database health, performance, and potential failures using Datadog, AWS CloudWatch, Azure Monitor, or Prometheus.
- Automation & Scripting: Develop automation scripts using Python, AWS CLI, PowerShell, or shell scripting to streamline database operations.
- Security & Compliance: Implement database security best practices, including access controls, encryption, key management, and compliance with cloud security standards.
- Incident & Problem Management: Work within ITIL frameworks to resolve incidents and service requests, and perform root cause analysis for problem management.
- High Availability & Scalability: Design and manage Neo4j clustering, MongoDB replication/sharding, and HADR configurations across cloud and hybrid environments.
- Vendor & Third-Party Tool Management: Evaluate, implement, and manage third-party tools for Neo4j, MongoDB, and cloud database solutions.
- Cross-Platform Database Support: Provide secondary support for SQL Server (Always On, Replication, Log Shipping), PostgreSQL (Streaming Replication, Partitioning), MySQL (InnoDB Cluster, Master-Slave Replication), and Oracle (RAC, Data Guard, GoldenGate).
- Cloud Platform Expertise: Hands-on with cloud-native database services such as AWS DocumentDB, DynamoDB, Azure CosmosDB, Google Firestore, and Google Bigtable.
- Cost Optimization: Analyze database workload, optimize cloud costs, and recommend licensing enhancements.

Knowledge & Skills:
- Strong expertise in Neo4j (Cypher Query Language, APOC, Graph Algorithms, GDS Library) and MongoDB (Aggregation Framework, Sharding, Replication, Indexing).
- Experience with homogeneous and heterogeneous database migrations (NoSQL-to-NoSQL, Graph-to-RDBMS, RDBMS-to-NoSQL).
- Familiarity with database monitoring tools such as Datadog, Prometheus, CloudWatch, and Azure Monitor.
- Proficiency in automation using Python, AWS CLI, PowerShell, and Bash/shell scripting.
- Experience in cloud-based database deployment using AWS RDS, Aurora, DynamoDB, Azure SQL, Azure CosmosDB, GCP Cloud SQL, Firebase, and Bigtable.
- Understanding of microservices and event-driven architectures, integrating MongoDB and Neo4j with applications using Kafka, RabbitMQ, or AWS SNS/SQS.
- Experience with containerization (Docker, Kubernetes) and Infrastructure as Code (Terraform, CloudFormation, Ansible).
- Strong analytical and problem-solving skills for database performance tuning and optimization.

Education & Certifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Database specialty certifications in Neo4j and MongoDB (Neo4j Certified Professional, MongoDB Associate/Professional Certification).
- Cloud certifications (AWS Certified Database - Specialty, Azure Database Administrator Associate, Google Cloud Professional Data Engineer).

Preferred Experience:
- 5+ years of experience in database administration, with at least 3 years dedicated to Neo4j and MongoDB.
- Hands-on experience with GraphDB and NoSQL architecture and migrations.
- Experience working in DevOps environments and automated CI/CD pipelines for database deployments.
- Strong expertise in data replication, ETL, and database migration tools such as AWS DMS, Azure DMS, MongoDB Atlas Live Migrate, and the Neo4j ETL Tool.
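To illustrate the graph-modeling idea behind the Neo4j responsibilities above: relationship queries that require repeated joins in a document or relational store become simple traversals in a graph model. This toy, standard-library-only sketch (the data and names are invented; a real deployment would express this as a Cypher query) walks a "follows" graph with breadth-first search:

```python
from collections import deque

# Toy property-graph data: nodes and directed "follows" edges, stored here
# as a plain adjacency dict in place of a Neo4j database.
follows = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave"],
    "dave": [],
}

def within_hops(graph, start, max_hops):
    """Return everyone reachable from `start` in at most `max_hops` edges."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # do not expand beyond the hop limit
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    seen.discard(start)
    return sorted(seen)
```

In Cypher the equivalent would be a variable-length pattern such as `MATCH (a)-[:FOLLOWS*1..2]->(b)`; the point of the graph model is that hop depth is a query parameter rather than a stack of joins.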
Posted 2 weeks ago
5.0 years
0 Lacs
India
On-site
About the Role: We are seeking an experienced AWS Migration Engineer to lead and execute end-to-end cloud migration projects. The ideal candidate will have a strong background in AWS services, networking, and cloud architecture.

Key Responsibilities:
- Design and implement end-to-end AWS migration strategies
- Lead complex migration projects from planning to execution
- Optimize existing infrastructure for cloud environments
- Collaborate with cross-functional teams to ensure smooth transitions
- Provide technical guidance and mentorship to junior team members
- Implement and manage secure, scalable network architectures in AWS

Required Skills and Qualifications:
- 5+ years of experience in cloud migrations, with a focus on AWS
- Deep knowledge of AWS migration services (e.g., AWS Migration Hub, AWS Application Migration Service, AWS Server Migration Service)
- Strong understanding of networking concepts and implementations in AWS, including: VPN and AWS Direct Connect; VPC peering and AWS Transit Gateway; Network Address Translation (NAT); route tables and subnets; Security Groups and Network Access Control Lists (NACLs)
- Expertise in a wide range of AWS services, including but not limited to: Compute: EC2, Lambda, ECS, EKS; Storage: S3, EBS, EFS; Database: RDS, DynamoDB, Aurora; Networking: VPC, Route 53, CloudFront; Security: IAM, AWS WAF, AWS Shield; Management: CloudWatch, CloudTrail, AWS Config
- Experience with AWS Dedicated Hosts and EC2 Dedicated Instances
- Proficiency in infrastructure-as-code tools (e.g., Terraform, CloudFormation)
- Strong scripting skills in languages such as Python, Bash, or PowerShell
- Experience with Windows and Linux operating systems
- Excellent problem-solving and communication skills
- Working knowledge of the configuration management tool Ansible

Preferred Qualifications:
- Experience with containerization and orchestration (e.g., Docker, Kubernetes)
- Familiarity with database migration techniques and tools (e.g., AWS Database Migration Service)
- Understanding of security best practices in cloud environments
- Experience with hybrid cloud architectures
- Familiarity with AWS Organizations and multi-account strategies
- Good to have: Citrix, Azure, and SQL knowledge
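As a hedged illustration of the route-table and subnet concepts listed above, here is a minimal longest-prefix-match lookup using Python's standard ipaddress module. The CIDR blocks and targets are invented, loosely mirroring how a VPC route table selects the most specific matching route:

```python
import ipaddress

# Invented route table: the most specific (longest) matching prefix wins,
# which is how VPC route tables resolve a destination.
routes = [
    ("10.0.0.0/16", "local"),
    ("10.0.1.0/24", "peering-connection"),
    ("0.0.0.0/0", "nat-gateway"),
]

def lookup(ip):
    """Return the target of the most specific route containing `ip`."""
    addr = ipaddress.ip_address(ip)
    matches = [
        (ipaddress.ip_network(cidr), target)
        for cidr, target in routes
        if addr in ipaddress.ip_network(cidr)
    ]
    net, target = max(matches, key=lambda m: m[0].prefixlen)
    return target
```

For example, an address inside the /24 falls within both the VPC CIDR and the peering route, and the /24 wins on prefix length, exactly the behavior migration engineers rely on when carving subnets out of a larger VPC block.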
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

The Role: The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities:
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience:
- First Class Degree in Engineering/Technology/MCA
- 5 to 8 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation, and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test, and deployment of data pipelines
- Experience in cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have):
- ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
- Big Data: Experience of 'big data' platforms such as Hadoop, Hive, or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of Data Warehousing concepts; Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable):
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
- Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Digital Software Engineering
------------------------------------------------------
Time Type:
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
3.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Role: The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities:
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience:
- First Class Degree in Engineering/Technology/MCA
- 3 to 4 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation, and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test, and deployment of data pipelines
- Experience in cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have):
- ETL: Hands-on experience of building data pipelines. Proficiency in at least one data integration platform such as Ab Initio, Apache Spark, Talend, or Informatica
- Big Data: Exposure to 'big data' platforms such as Hadoop, Hive, or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of Data Warehousing concepts; Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable):
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
- Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.
Posted 2 weeks ago
6.0 - 10.0 years
35 - 37 Lacs
Bengaluru
Work from Office
Job Overview:
We are seeking a highly skilled Senior Platform Engineer with a robust background in Python programming and extensive experience with AWS services. With at least 6 years of relevant experience, the ideal candidate will be an expert in serverless development and event-driven architecture design. This position is geared towards a proactive and passionate engineer eager to take ownership of modules within our cloud management platform, contributing significantly to its scalability, efficiency, and innovation. You'll have the opportunity to work on cutting-edge technology, shaping the future of our cloud management platform with your expertise. If you're passionate about building scalable, efficient, and innovative cloud solutions, we'd love to have you on our team.
Responsibilities:
- Take full ownership of developing, maintaining, and enhancing specific modules of our cloud management platform, ensuring they meet our standards for scalability, efficiency, and reliability.
- Design and implement serverless applications and event-driven systems that integrate seamlessly with AWS services, driving the platform's innovation forward.
- Work closely with cross-functional teams to conceptualize, design, and implement advanced features and functionalities that align with our business goals.
- Utilize your deep expertise in cloud architecture and software development to provide technical guidance and best practices to the engineering team, enhancing the platform's capabilities.
- Stay ahead of the curve by researching and applying the latest trends and technologies in the cloud industry, incorporating these insights into the development of our platform.
- Solve complex technical issues, providing advanced support and guidance to both internal teams and external stakeholders.
Requirements:
- A minimum of 6 years of relevant experience in platform or application development, with a strong emphasis on Python and AWS cloud services.
- Proven expertise in serverless development and event-driven architecture design, with a track record of developing and shipping high-quality SaaS platforms on AWS.
- Comprehensive understanding of cloud computing concepts, architectural best practices, and AWS services, including but not limited to Lambda, RDS, DynamoDB, and API Gateway.
- Solid knowledge of Object-Oriented Programming (OOP), SOLID principles, and experience with relational and NoSQL databases.
- Proficiency in developing and integrating RESTful APIs and familiarity with source control systems like Git.
- Exceptional problem-solving skills, capable of optimizing complex systems.
- Excellent communication skills, capable of effectively collaborating with team members and engaging with stakeholders.
- A strong drive for continuous learning and staying updated with industry developments.
Nice to Have:
- AWS Certified Solutions Architect, AWS Certified Developer, or other relevant cloud development certifications.
- Experience with the AWS Boto3 SDK for Python.
- Exposure to other cloud platforms such as Azure or GCP.
- Knowledge of containerization and orchestration technologies, such as Docker and Kubernetes.
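As a rough illustration of the serverless, event-driven work this listing describes, a minimal Python Lambda handler for an API Gateway proxy-integration event is sketched below. The event shape follows AWS's documented proxy-integration convention; the payload fields (`name`, the greeting) are invented for the example, and a real handler for this role would typically call services such as DynamoDB or S3 via boto3.

```python
import json

def lambda_handler(event, context):
    """Minimal API Gateway proxy handler (illustrative sketch).

    Parses the JSON body of a proxy-integration event and returns a
    proxy-integration response dict with statusCode, headers and body.
    """
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        # Malformed JSON from the caller -> HTTP 400
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is a plain function taking a dict, it can be unit-tested locally without any AWS infrastructure, which is a common practice for the kind of serverless codebases mentioned above.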
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Presidio, Where Teamwork and Innovation Shape the Future
At Presidio, we’re at the forefront of a global technology revolution, transforming industries through cutting-edge digital solutions and next-generation AI. We empower businesses—and their customers—to achieve more through innovation, automation, and intelligent insights.
The Role
Presidio is looking for a Senior Engineer to join our engineering team. In this role, you will work closely with our engineering team to design, develop, and implement applications. You will be responsible for ensuring the performance, scalability, and quality of development.
Mandatory Skills
- Hands-on experience with web development in any of the following programming languages: Java, .Net, Python, Go, C#, JavaScript
- Hands-on experience in any of the following JavaScript frameworks: Angular or React, with NodeJS
- Experience with back-end development, basic microservices implementation and containerization using Docker
- Expertise in relational databases such as Postgres, MySQL, Oracle
- Expertise in NoSQL databases such as MongoDB, Amazon DynamoDB, Cassandra
- Good knowledge of any of the cloud providers such as Amazon Web Services, Microsoft Azure or Google Cloud
- Excellent verbal and written communication
Role Description
- Develops software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; following the software development lifecycle
- Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions
- Documents and demonstrates solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code
- Prepares and installs solutions by determining and designing system specifications, standards, and programming
- Improves operations by conducting systems analysis and recommending changes in policies and procedures
- Updates job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment; participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations
- Protects operations by keeping information confidential
- Provides information by collecting, analyzing, and summarizing development and service issues
- Accomplishes engineering and organization mission by completing related results as needed
- Supports and develops software engineers by providing advice, coaching, and educational opportunities
Your future at Presidio
Joining Presidio means stepping into a culture of trailblazers—thinkers, builders, and collaborators—who push the boundaries of what’s possible. With our expertise in AI-driven analytics, cloud solutions, cybersecurity, and next-gen infrastructure, we enable businesses to stay ahead in an ever-evolving digital world. Here, your impact is real. Whether you're harnessing the power of Generative AI, architecting resilient digital ecosystems, or driving data-driven transformation, you’ll be part of a team that is shaping the future. Ready to innovate? Let’s redefine what’s next—together.
About Presidio
At Presidio, speed and quality meet technology and innovation. Presidio is a trusted ally for organizations across industries with a decades-long history of building traditional IT foundations and deep expertise in AI and automation, security, networking, digital transformation, and cloud computing. Presidio fills gaps, removes hurdles, optimizes costs, and reduces risk. Presidio’s expert technical team develops custom applications, provides managed services, enables actionable data insights and builds forward-thinking solutions that drive strategic outcomes for clients globally. For more information, visit www.presidio.com.
Presidio is committed to hiring the most qualified candidates to join our amazing culture. We aim to attract and hire top talent from all backgrounds, including underrepresented and marginalized communities. We encourage women, people of color, people with disabilities, and veterans to apply for open roles at Presidio. Diversity of skills and thought is a key component to our business success.
Recruitment Agencies, Please Note:
Presidio does not accept unsolicited agency resumes/CVs. Do not forward resumes/CVs to our careers email address, Presidio employees or any other means. Presidio is not responsible for any fees related to unsolicited resumes/CVs.
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Onsite – Chennai & Bengaluru, India
Experience: 5+ Years
Employment Type: Full-Time
Job Overview:
We are looking for an experienced and proactive Node.js Developer with a strong background in building scalable backend systems, APIs, and services. This is a full-time onsite position based in Chennai or Bengaluru, offering an opportunity to work on high-impact projects using modern backend technologies.
Key Responsibilities:
- Design, develop, and maintain robust and scalable backend applications using Node.js.
- Build and consume RESTful APIs, ensuring high performance and responsiveness.
- Lead the integration of third-party services and backend logic with front-end components.
- Work closely with architects, DevOps, and product teams to deliver quality solutions.
- Implement security and data protection best practices in all backend services.
- Write clean, well-documented, and efficient code following industry standards.
- Participate in code reviews, testing, and performance tuning.
- Guide and mentor junior developers as needed.
Required Skills & Qualifications:
- 5+ years of professional experience in Node.js backend development.
- Expertise in JavaScript (ES6+), asynchronous programming, and Node.js frameworks like Express.js or NestJS.
- Strong experience in designing and consuming RESTful APIs.
- Solid understanding of database systems, especially NoSQL databases like MongoDB, Redis, or DynamoDB.
- Experience with tools like Postman, Git, and JIRA.
- Working knowledge of containerized environments (Docker/Kubernetes) is a plus.
- Familiarity with CI/CD pipelines and cloud environments (AWS/GCP/Azure) is advantageous.
- Strong problem-solving skills and the ability to work independently or within a team.
Preferred Skills:
- Experience with microservices architecture.
- Exposure to frontend technologies (React/Angular) is a plus.
- Knowledge of TypeScript is an added advantage.
Education:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
On-site
About The Role
Grade Level (for internal use): 09
The Role: Platform Engineer
Department Overview
PVR DevOps is a global team that provides specialized technical builds across a suite of products. DevOps members work closely with the Development, Testing and Client Services teams to build and develop applications using the latest technologies to ensure the highest availability and resilience of all services. Our work helps ensure that PVR continues to provide high quality service and maintain client satisfaction.
Position Summary
S&P Global is seeking a highly motivated engineer to join our PVR DevOps team in Noida. DevOps is a rapidly growing team at the heart of ensuring the availability and correct operation of our valuations, market and trade data applications. The team prides itself on its flexibility and technical diversity to maintain service availability and contribute improvements through design and development.
Duties & Accountabilities
The role of Principal DevOps Engineer is primarily focused on building functional systems that improve our customer experience. Responsibilities include:
- Creating infrastructure and environments to support our platforms and applications, using Terraform and related technologies to ensure all our environments are controlled and consistent.
- Implementing DevOps technologies and processes, e.g. containerisation, CI/CD, infrastructure as code, metrics, monitoring, etc.
- Automating always.
- Supporting, monitoring, maintaining and improving our infrastructure and the live running of our applications.
- Maintaining the health of cloud accounts for security, cost and best practices.
- Providing assistance to other functional areas such as development, test and client services.
Knowledge, Skills & Experience
- Strong background of at least 3 to 5 years of experience in Linux/Unix administration in IaaS / PaaS / SaaS models
- Deployment, maintenance and support of enterprise applications on AWS, including (but not limited to) Route53, ELB, VPC, EC2, S3, ECS, SQS
- Good understanding of Terraform and similar ‘Infrastructure as Code’ technologies
- Strong experience with SQL and NoSQL databases such as MySQL, PostgreSQL, DB/2, MongoDB, DynamoDB
- Experience with automation/configuration management using toolsets such as Chef, Puppet or equivalent
- Experience of enterprise systems deployed as micro-services through code pipelines utilizing containerization (Docker)
- Working knowledge of, and the ability to write, scripts in languages including Bash and Python, plus an ability to understand Java, JavaScript and PHP
Personal competencies
Personal Impact
- Confident individual, able to represent the team at various levels
- Strong analytical and problem-solving skills
- Demonstrated ability to work independently with minimal supervision
- Highly organised with very good attention to detail
- Takes ownership of issues and drives through the resolution
- Flexible and willing to adapt to changing situations in a fast-moving environment
Communication
- Demonstrates a global mindset, respects cultural differences and is open to new ideas and approaches
- Able to build relationships with all teams, identifying and focusing on their needs
- Ability to communicate effectively at business and technical level is essential
- Experience working in a global team
Teamwork
- An effective team player and strong collaborator across technology and all relevant areas of the business
- Enthusiastic, with a drive to succeed
- Thrives in a pressurized environment with a “can do” attitude
- Must be able to work under own initiative
About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information.
Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.
What’s In It For You?
Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead.
We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global.
Our Benefits Include
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 309235
Posted On: 2025-06-04
Location: Noida, Uttar Pradesh, India
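The account-health duties described in this listing often come down to small audit scripts. As a minimal sketch of that kind of work (the required tag set and the resource dicts below are invented for illustration, not S&P Global's actual policies or data), a Python tag-compliance check might look like:

```python
# Illustrative infrastructure-hygiene check; the tag policy is an assumption.
REQUIRED_TAGS = {"owner", "environment", "cost-center"}

def missing_tags(resource):
    """Return the set of required tags absent from one resource dict."""
    return REQUIRED_TAGS - set(resource.get("tags", {}))

def audit(resources):
    """Map resource id -> missing tags, for every resource that fails the check."""
    report = {}
    for res in resources:
        gaps = missing_tags(res)
        if gaps:
            report[res["id"]] = gaps
    return report
```

In practice the resource dicts would come from a cloud inventory API rather than being constructed by hand, and the report would feed monitoring or ticketing rather than being returned to a caller.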
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Role: Senior Python Backend Engineer
Location: Remote, India
Full-time (immediate joiner needed). We are looking at 10+ years of experience for the senior level and 7–10 years for the mid-level.
Job Description:
• Deep hands-on experience with Python
• SQL & NoSQL required
• Must be able to dockerize the microservices they build, but not set up pods, services, deployments, etc.
• Proven expertise in microservices architecture, containerization (Docker), and cloud-native app development (any cloud)
• Build and scale RESTful APIs, async jobs, background schedulers, and data pipelines for high-volume systems
• Strong understanding of API design, rate limiting, secure auth (OAuth2), and best practices
• Create and optimize NoSQL and SQL data models (MongoDB, DynamoDB, PostgreSQL, ClickHouse)
Soft Skills
Clear communication, an ownership mindset, and self-driven.
Important: the candidate must have worked on high-volume, production-scale systems, not POCs or small-scale work.
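The rate-limiting requirement in the listing above can be illustrated with a minimal token-bucket sketch. The capacity and refill rate are arbitrary example values, and a production API would typically keep this state in a shared store such as Redis rather than in process memory; the clock is injectable purely so the behaviour can be tested deterministically.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch).

    capacity: maximum burst size; rate: tokens refilled per second.
    """

    def __init__(self, capacity, rate, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.clock = clock
        self.tokens = float(capacity)   # start full: allow an initial burst
        self.updated = clock()

    def allow(self):
        """Consume one token if available; return whether the request passes."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A framework middleware would hold one bucket per client key and reject requests (e.g. with HTTP 429) when `allow()` returns `False`.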
Posted 2 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Ab Initio with ETL Tester:
- Hands-on 3-5 years of experience in ETL / data warehousing, preferably Ab Initio
- Hands-on 3-5 years of experience in Oracle advanced SQL (ability to construct and execute complex SQL queries and understand Oracle errors)
- Hands-on experience in API testing (fine for one of the resources to have this skill)
- Hands-on experience in Unix
- Good analytical, reporting and communication skills
- Lead the scrum team in using Agile methodology and scrum practices
- Help the product owner and development team to achieve customer satisfaction
- Lead the scrum and development teams in self-organization
- Remove impediments and coach the scrum team on removing impediments
- Help the scrum and development teams to identify and fill in blanks in the Agile framework
- Resolve conflicts and issues that occur
- Help the scrum team achieve higher levels of scrum maturity
- Support the product owner and provide education where needed
Required Skills
- Knowledge of tooling and integration with CI/CD tools like Jenkins, Travis CI or AWS CodePipeline
- Collaborate with clients to understand their business requirements and design custom contact center solutions using AWS Connect
- Demonstrate deep knowledge of AWS Connect and its integration with other AWS services, including Lambda, S3, DynamoDB and others
- Prior experience of 3+ years on a scrum team
- Must have AWS Connect knowledge
- Ability to analyze and think quickly and to resolve conflict
- Knowledgeable in techniques to fill in gaps in the scrum
- Ability to determine what is scrum and what is not
- Experience with successful Agile techniques
- Ability to work with and lead a team
- Strong communication, interpersonal and mentoring skills
- Ability to adapt to a changing environment
- Self-motivation and ability to stay focused in the middle of distraction
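A typical ETL-testing task of the kind listed above is reconciling row counts between source and target tables. The sketch below illustrates this; the posting concerns Oracle, but SQLite is used here only so the example is self-contained, and the table and column names (`src_trades`, `tgt_trades`, `load_date`) are made up.

```python
import sqlite3

def reconcile_counts(conn):
    """Return (load_date, source_rows, target_rows) for dates where counts differ.

    Illustrative ETL reconciliation check: compare per-date row counts in a
    source table against the loaded target table.
    """
    sql = """
        SELECT s.load_date,
               s.cnt AS source_rows,
               COALESCE(t.cnt, 0) AS target_rows
        FROM (SELECT load_date, COUNT(*) AS cnt
              FROM src_trades GROUP BY load_date) s
        LEFT JOIN (SELECT load_date, COUNT(*) AS cnt
                   FROM tgt_trades GROUP BY load_date) t
          ON s.load_date = t.load_date
        ORDER BY s.load_date
    """
    return [row for row in conn.execute(sql) if row[1] != row[2]]
```

The same join-and-compare pattern, expressed in Oracle SQL, is a common building block in ETL test suites alongside column-level checksum comparisons.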
Posted 2 weeks ago
3.0 - 6.0 years
4 - 8 Lacs
Chennai
Work from Office
We’re looking for an experienced Amazon Connect Expert to join our team and help with the design, configuration, and integration of advanced contact center solutions using Amazon Connect. The ideal candidate has deep technical knowledge of Amazon Connect and its ecosystem, and is skilled in creating comprehensive documentation, including user-friendly step-by-step guides for end users and technical teams. You’ll play a key role in helping us build modern, scalable, and efficient customer service experiences that integrate seamlessly with internal systems and third-party platforms.
Primary responsibilities will include:
Amazon Connect Architecture & Configuration
- Designing, configuring, and maintaining Amazon Connect environments, including contact flows, routing profiles, hours of operation, and the agent experience
- Building and optimizing Lex bots, Lambda functions, and Amazon Connect Tasks
Integration & Automation
- Integrating Amazon Connect with CRMs (e.g., Salesforce, HubSpot), ticketing platforms (e.g., ServiceNow, Zendesk), and internal tools via APIs, AWS Lambda, EventBridge, S3, DynamoDB, etc.
- Developing automation scripts and workflows to streamline operations and reduce manual work
Documentation & Guides
- Creating step-by-step guides, user manuals, knowledge base articles, and training materials for various stakeholders (agents, supervisors, developers)
- Maintaining documentation for architecture diagrams, integration patterns, and deployment processes
Project Leadership & Support
- Leading or collaborating on the implementation of new contact center features and migrations
- Troubleshooting issues, monitoring performance, and ensuring high availability and compliance with SLAs
Collaboration & Enablement
- Working closely with business stakeholders, engineers, and IT teams to translate contact center needs into scalable Amazon Connect solutions
- Training and mentoring internal teams on best practices, configuration, and ongoing support
The Candidate:
Required skills/qualifications:
- Experience with Amazon Connect, including complex setups and integrations (at least 3 years preferred)
- Deep understanding of AWS services commonly used with Connect (Lambda, S3, Lex, DynamoDB, CloudWatch, etc.)
- Hands-on experience integrating Amazon Connect with CRMs and third-party platforms
- Strong documentation skills: the ability to write clear, concise, and visually helpful step-by-step instructions
- Familiarity with contact center metrics, KPIs, and customer experience best practices
- Experience with scripting or coding in Python, JavaScript, or Node.js is a plus
- AWS certification (especially in Connect or Solutions Architect) is a plus
- Fluency in written and spoken English at CEFR C1 level or above
Preferred skills/qualifications:
- Experience with Amazon Connect Cases, Wisdom, Tasks, and Contact Lens
- Understanding of TCPA compliance and secure call handling
- UI/UX understanding for agent and customer interfaces
- Experience working in regulated industries (e.g., healthcare, finance, education)
Posted 2 weeks ago
3.0 - 5.0 years
4 - 6 Lacs
Hyderabad
Work from Office
Responsibilities
- Design and implement front-end applications using Angular.
- Develop serverless applications using AWS Lambda, API Gateway, DynamoDB, S3, AWS Step Functions, and other AWS services.
- Develop server-side logic using Node.js.
- Collaborate with UI/UX designers to ensure seamless integration.
- Optimize applications for maximum speed and scalability.
- Write reusable, testable, and efficient code.
- Integrate data storage solutions like MongoDB, PostgreSQL, etc.
- Participate in code reviews and provide constructive feedback.
- Work with cross-functional teams to define project requirements and deliverable timelines.
- Troubleshoot and debug applications as necessary.
- Stay up-to-date with emerging technologies and industry trends.
Qualifications
- Proven experience as a Full Stack Developer or a similar role.
- Experience in developing serverless applications on AWS.
- Strong proficiency in TypeScript.
- Experience with Node.js and the Angular framework.
- Understanding of RESTful API design and implementation.
- Familiarity with database technologies such as MongoDB and SQL.
- Good problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Role: Python Backend Engineer
Location: India/Remote
Full-time
Please note:
2. Requirements for both positions remain the same except for the years of experience.
3. Please upload to the respective position IDs and share an email as well: Senior BE (34311) and Mid-level BE (34310).
4. ONLY CANDIDATES WITH IMMEDIATE TO 1-WEEK AVAILABILITY should be shared.
5. Candidates' availability time slots for interview (minimum 2 time slots) should also be shared in the email.
Technical and soft skill requirements:
• Deep hands-on experience with Python
• SQL & NoSQL required
• Must be able to dockerize the microservices they build, but not set up pods, services, deployments, etc.
• Proven expertise in microservices architecture, containerization (Docker), and cloud-native app development (any cloud)
• Build and scale RESTful APIs, async jobs, background schedulers, and data pipelines for high-volume systems
• Strong understanding of API design, rate limiting, secure auth (OAuth2), and best practices
• Create and optimize NoSQL and SQL data models (MongoDB, DynamoDB, PostgreSQL, ClickHouse)
Soft Skills
Clear communication, an ownership mindset, and self-driven.
Important: the candidate must have worked on high-volume, production-scale systems, not POCs or small-scale work.
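The "background schedulers" item in this listing can be sketched with asyncio. The interval and the job below are illustrative assumptions; production systems often reach for Celery beat or APScheduler instead, but the core loop looks like this:

```python
import asyncio

async def run_periodically(job, interval, stop):
    """Run `job` every `interval` seconds until the `stop` event is set."""
    while not stop.is_set():
        job()
        try:
            # Sleep for the interval, but wake immediately if stop is set.
            await asyncio.wait_for(stop.wait(), timeout=interval)
        except asyncio.TimeoutError:
            pass

async def demo(ticks):
    """Run a counting job for roughly `ticks` intervals and report the count."""
    counter = []
    stop = asyncio.Event()
    task = asyncio.create_task(
        run_periodically(lambda: counter.append(1), 0.01, stop))
    await asyncio.sleep(0.01 * ticks + 0.005)
    stop.set()
    await task
    return len(counter)
```

Using an `Event` for shutdown (rather than cancelling the task) lets the scheduler finish its current job cleanly, which matters once jobs touch databases or queues.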
Posted 2 weeks ago
5.0 years
0 Lacs
Chandigarh, India
On-site
Products, R&D and Production business function is responsible for ensuring that Basware continues to provide its customers with industry-leading SaaS solutions to simplify operations and spend smarter. This global function creates and executes the product strategy, new product innovations, roadmap, design, product development and product launches for Basware’s global product portfolio. Together with other Basware units, Products, R&D, and Production ensures that our customers can successfully meet their business objectives through cooperating with Basware and using our product and service portfolio. We are looking for a Senior Cloud Developer to develop highly scalable cloud-based applications using state-of-the-art software and web technologies. As a Senior Cloud Developer, you’ll be part of our talented E2E Platform development team in charge of the Basware e-invoicing applications. The team oversees the building of a responsive, high-performance user experience for our cloud-based applications, accessed by over a million users across the globe. Key role responsibilities: Design and build advanced, scalable web applications and microservices. Collaborate with cross-functional teams to define, design, and ship new features. Unit-test code for robustness, including edge cases, usability, and general reliability. Continuously discover, evaluate, and implement new technologies to maximize development efficiency. Enhance engineering team culture by demonstrating full-stack contributions. Work with product and business owners to understand product requirements. In order to thrive and succeed in this role, we expect the following: Full-stack engineer with 5+ years of experience. Must have expertise in C#/.NET development. Strong background in test-driven development (unit tests and integration tests). Strong knowledge of relational and NoSQL databases, preferably DynamoDB. Knowledge of working with CI/CD tools (Jenkins/Bamboo) and Git.
Strong knowledge of AWS serverless architecture (AWS Lambda, API Gateway, etc.). Familiarity with microservice architecture. Good to have: Solid expertise in Angular and TypeScript will be a huge plus. Interest in learning Python. Qualifications: BE, BTech, MTech or MCA
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology/MCA 5 to 8 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills Technical Skills (Must Have) ETL: Hands-on experience of building data pipelines.
Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Experience working with Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Basics of job schedulers like Autosys. Basics of Entitlement management Certification on any of the above topics would be an advantage.
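The ETL responsibilities described above — building pipelines with validation, cleansing, and data controls — can be illustrated without any platform at all. Here is a small extract → transform → load sketch in plain Python; the column names, cleansing rules, and in-memory "warehouse" are hypothetical stand-ins for a real source feed and sink.

```python
import csv
import io

# Extract: raw CSV standing in for a source-system feed, complete with
# stray whitespace, a missing balance, and lower-case currency codes.
RAW = """account_id,balance,currency
A-1, 1200.50 ,usd
A-2,,usd
A-3, 75 ,eur
"""

def extract(raw):
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Cleanse and standardize: trim whitespace, drop rows with no balance,
    upper-case currency codes - a minimal example of a data-quality control."""
    out = []
    for row in rows:
        balance = row["balance"].strip()
        if not balance:
            continue  # data control: reject incomplete records
        out.append({
            "account_id": row["account_id"].strip(),
            "balance": float(balance),
            "currency": row["currency"].strip().upper(),
        })
    return out

def load(rows, sink):
    """Load into a sink (here a list; in practice a warehouse table)."""
    sink.extend(rows)
    return len(rows)

warehouse = []
print(load(transform(extract(RAW)), warehouse))  # 2 rows survive cleansing
```

Real platforms (Spark, Ab Initio, Talend) add distribution, lineage, and scheduling, but the extract/transform/load seams — and where the quality controls live — look the same.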
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: ------------------------------------------------------ Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi”) invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View the "EEO is the Law" poster. View the EEO is the Law Supplement. View the EEO Policy Statement. View the Pay Transparency Posting
Posted 2 weeks ago
3.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology/MCA 3 to 4 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills Technical Skills (Must Have) ETL: Hands-on experience of building data pipelines.
Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Experience working with Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Basics of job schedulers like Autosys. Basics of Entitlement management Certification on any of the above topics would be an advantage.
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi”) invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View the "EEO is the Law" poster. View the EEO is the Law Supplement. View the EEO Policy Statement. View the Pay Transparency Posting
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements Description and Requirements The Wireless Core Network Development team is responsible for end-to-end network architecture, development, and operations, including service orchestration and automation. The team designs, develops, maintains, and supports our Core Wireless Network and all its services related to Signalling and Roaming. We work as a team to introduce the latest technology and software to enable network orchestration and automation in the fast-evolving 5G ecosystem, and propel TELUS’ digital transformation. Our team creates previously impossible solutions by leveraging the new approach to enable our customers’ unique and rich wireless experiences. These innovative solutions will improve the quality of life of thousands while revolutionizing how everything and everyone connects. You will own the customer experience by providing strategy, managing change and leveraging best-in-class security and AI to deliver reliable products to our customers. This will represent a fundamental change in how the Telecom industry works, opening the possibility of making private cellular networks globally available, sparking innovation and enabling access to the digital world to more people by providing never-before-seen reliability at reduced costs. What You'll Do (Key Responsibilities) Overall responsibility for the architecture, design and operational support of TELUS signaling solutions (DRA/SCP/SEPP/STP/GTPF/SIG FW/Roaming VAS). This includes, but is not limited to, fully understanding how the current network is architected and identifying areas of improvement/modernization that we need to undertake, driving reliability and efficiency in the support of the solution. Strong knowledge of 3G/4G/5G networks and related architectures. Work with Diameter/4G interfaces (Gy, Gx, Ro, Rf, S6a, etc.), SS7 and SIGTRAN protocols to support solution deployment and optimization.
Working experience with troubleshooting tools - Netscout, Wireshark Help us design, develop, and implement software solutions supporting the STP/DRA/SCP/SEPP/GTPF/SIG FW/Roaming VAS platform within the 5G core architecture. Help design and develop management, assurance and closed-loop automation of the STP/DRA/SCP/SEPP/GTPF/SIG FW/Roaming VAS, which will reside on cloud-native services. Help design, develop, and implement monitoring (key performance indicators, metrics and log management) solutions for the STP/DRA/SCP/SEPP platform as part of network assurance of the platforms and the services that traverse the platforms. Bring your ideas, bring your coding skills, and bring your passion to learn. Identify E2E network control signaling and roaming gaps, understand existing solutions and develop possible alternate solutions, and architect new future-friendly solutions as technology evolves. Collaborate with cross-functional teams from the Radio, Core, Transport, Infrastructure, Business and Assurance domains, and define migration strategies for moving services to the cloud.
Use your experience in security, configuration management, data model management and other technologies like RESTful APIs, JSON, NETCONF, Apache Nifi, Kafka, SNMP, Java, Bash, Python, HTTPS, HTTP2, and SSH to help bring and keep the solution in the live network providing service to our customers Maintain/develop Network Architecture/Design documents Additional Job Description What you bring: 5+ years of telecommunication experience Interested in adapter API design and developing back-end software Proven knowledge of technologies such as Service Based Architecture (SBA), Subscriber Data Management functions, http2, Diameter, Sigtran, SS7, and 5G Protocol Experience with containerization tools such as Docker, Kubernetes, and/or OpenStack technology General understanding of TCP/IP networking and familiarity with TCP, UDP, SS7, RADIUS, and Diameter protocols along with SOAP/REST API working principles Proven understanding of IPSEC, TLS 1.2/1.3 and understanding of the OAUTH 2.0 framework Advanced technical and analytical skills, and the ability to take responsibility for the overall technical direction of the project Experience with Public Cloud Native Services like OpenShift, AWS, GCP or Azure Experience with open-source assurance tools like Prometheus, Grafana, Jaeger, OpenSearch, Elasticsearch, etc.
Expert knowledge of the software project lifecycle and CI/CD pipelines A Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, a STEM-related field, or relevant experience Great-to-haves: Understanding of 3GPP architectures and reference points for 4G and 5G wireless networks Knowledge of the 3GPP, TMF, GSMA and IETF standards bodies Experience with Radio, Core, Transport and Infrastructure product design, development, integration, test and operations Low-level protocol implementation on top of SCTP, GTPv1 and GTPv2 Experience with MariaDB, Cassandra DB, MongoDB and Data Model Management AWS Fargate, Lambda, DynamoDB, SQS, Step Functions, CloudWatch, CloudFormation and/or AWS Cloud Development Kit Knowledge of Python, and API development in production environments 2+ years’ experience as a software developer Soft Skills: Strong analytical and problem-solving abilities Excellent communication skills, both written and verbal Ability to work effectively in a team environment Self-motivated with a proactive approach to learning new technologies Capable of working under pressure and managing multiple priorities EEO Statement At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources and exceptional customer service - all backed by TELUS, our multi-billion dollar telecommunications parent.
Equal Opportunity Employer At TELUS Digital, we are proud to be an equal opportunity employer and are committed to creating a diverse and inclusive workplace. All aspects of employment, including the decision to hire and promote, are based on applicants’ qualifications, merits, competence and performance without regard to any characteristic related to diversity.
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements Description and Requirements Join our team and what we’ll accomplish together The Wireless Core Network Development team is responsible for end-to-end network architecture, development, and operations, including service orchestration and automation. The team designs, develops, maintains, and supports our Core Wireless Network and all its services specific to our customer data. We work as a team to introduce the latest technology and software to enable network orchestration and automation in the fast-evolving 5G ecosystem, and propel TELUS’ digital transformation. Our team creates previously impossible solutions by leveraging the new approach to enable our customers’ unique and rich wireless experiences. These innovative solutions will improve the quality of life of thousands while revolutionizing how everything and everyone connects. You will own the customer experience by providing strategy, managing change and leveraging best-in-class security and AI to deliver reliable products to our customers. This will represent a fundamental change in how the Telecom industry works, opening the possibility of making private cellular networks globally available, sparking innovation and enabling access to the digital world to more people by providing never-before-seen reliability at reduced costs. What you'll do Overall responsibility for the architecture, design and operational support of TELUS subscriber database solutions (HLR, HSS, EIR, IMEIDB, UDM, UDR). This includes, but is not limited to, fully understanding how the current network is architected and identifying areas of improvement/modernization that we need to undertake, driving reliability and efficiency in the support of the solution. Help us design, develop, and implement software solutions supporting the subscriber data platforms within the 5G core architecture.
This will include management, assurance and closed-loop automation of the UDM, AUSF and SDL, which will reside on cloud-native services Bring your ideas, bring your coding skills, and bring your passion to learn Identify E2E network control signaling and roaming gaps, available and ongoing design, together with architecting future-friendly solutions as technology evolves Collaborate with cross-functional teams from the Radio, Core, Transport, Infrastructure, Business and Assurance domains, and define migration strategies for moving services to the cloud. Bring your experience in Open API, security, configuration and data model management, and processing with Node.js, and learn or bring your experience in other technologies like RESTful APIs, JSON, NETCONF, Apache Nifi, Kafka, SNMP, Java, Bash, Python, HTTPS, SSH and TypeScript Maintain/develop Network Architecture/Design documents What you bring 5+ years of telecommunication experience Experienced in adapter API design using RESTful APIs and NETCONF; interested in developing back-end software Proven knowledge of technologies such as Service Based Architecture (SBA), Subscriber Data Management functions, http2, Diameter, Sigtran, SS7, and 5G Protocol General understanding of TCP/IP networking and familiarity with TCP, UDP, SS7, RADIUS, and Diameter protocols along with SOAP/REST API working principles Proven understanding of IPSEC, TLS 1.2/1.3 and understanding of the OAUTH 2.0 framework 2+ years’ experience as a software developer, advanced technical and analytical skills, and the ability to take responsibility for the overall technical direction of the project Experience with Public Cloud Native Services like OpenShift, AWS, GCP or Azure Expert knowledge of database redundancy, replication and synchronization Knowledge of different database concepts (relational vs non-relational DB) Subject Matter Expert in implementing, integrating, and deploying solutions related to subscriber data management (HLR, HSS, EIR, IMEIDB, UDM, UDR, F5, Provisioning GW, AAA) on either private cloud or public cloud like AWS, OCP or GCP Expert knowledge of the software project lifecycle and CI/CD Pipelines A Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, a STEM-related field, or relevant experience Additional Job Description Great-to-haves (If you have it great, if you don't we will teach you) Understanding of 3GPP architectures and reference points for 4G and 5G wireless networks Knowledge of the 3GPP, TMF, GSMA and IETF standards bodies Experience with Radio, Core, Transport and Infrastructure product design, development, integration, test and operations Low-level protocol implementation on top of UDP, SCTP, GTPv1 and GTPv2 Experience with MariaDB, Cassandra DB, MongoDB and Data Model Management AWS Fargate, Lambda, DynamoDB, SQS, Step Functions, CloudWatch, CloudFormation and/or AWS Cloud Development Kit Knowledge of Python, and API development in production environments Experience with containerization tools such as Docker, Kubernetes, and/or OpenStack technology Soft Skills: Strong analytical and problem-solving abilities Excellent communication skills, both written and verbal Ability to work effectively in a team environment Self-motivated with a proactive approach to learning new technologies Capable of working under pressure and managing multiple priorities EEO Statement At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications and utilities sectors.
With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources and exceptional customer service - all backed by TELUS, our multi-billion dollar telecommunications parent. Equal Opportunity Employer At TELUS Digital, we are proud to be an equal opportunity employer and are committed to creating a diverse and inclusive workplace. All aspects of employment, including the decision to hire and promote, are based on applicants’ qualifications, merits, competence and performance without regard to any characteristic related to diversity.
Posted 2 weeks ago
DynamoDB is a popular NoSQL database service offered by Amazon Web Services (AWS) that is widely used by companies in India. The job market for DynamoDB professionals in India is currently booming, with many opportunities available for skilled individuals.
The average salary range for DynamoDB professionals in India varies based on experience level: - Entry-level: INR 4-6 lakhs per annum - Mid-level: INR 8-12 lakhs per annum - Experienced: INR 15-25 lakhs per annum
A typical career path in DynamoDB may involve progressing from roles such as Junior Developer to Senior Developer and eventually to a Tech Lead position. Opportunities for specialization in areas like database architecture or cloud solutions may also arise.
In addition to expertise in DynamoDB, professionals in this field are often expected to have knowledge of related technologies and tools such as AWS services, NoSQL databases, data modeling, and serverless architecture.
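The data-modeling skill mentioned above usually means designing DynamoDB composite keys around access patterns rather than around entities. Below is a tiny single-table sketch in Python; the `PK`/`SK` attribute names and the `CUST#`/`ORDER#` prefixes are illustrative, not from any particular schema.

```python
def order_keys(customer_id, order_id):
    """Composite keys for a hypothetical single-table design: all of a
    customer's orders share one partition key, sorted by order id."""
    return {"PK": f"CUST#{customer_id}", "SK": f"ORDER#{order_id}"}

def profile_keys(customer_id):
    """The customer's profile lives in the same partition under a fixed SK."""
    return {"PK": f"CUST#{customer_id}", "SK": "PROFILE"}

# A Query on PK = "CUST#42" would return the profile and every order in one
# call - the access pattern the table was modeled around.
items = [profile_keys("42"), order_keys("42", "1001"), order_keys("42", "1002")]
partition = [i for i in items if i["PK"] == "CUST#42"]
print(len(partition))  # 3
```

Choosing key prefixes so that related items collapse into one partition is what lets a single `Query` replace the joins a relational design would need.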
As you explore opportunities in the DynamoDB job market in India, remember to stay updated on the latest trends and technologies in the field. Prepare thoroughly for interviews by honing your skills and showcasing your expertise confidently. Good luck in your job search!