Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
2.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
We are seeking a skilled and motivated AWS Backup Specialist to design, implement, and manage AWS-native backup solutions, including disaster recovery strategies, while supporting a range of AWS services such as EC2, S3, EFS, and RDS. The candidate will have hands-on experience in creating scalable, secure, and cost-efficient backup and DR strategies, along with expertise in AWS services. Experience with Azure and GCP is considered an added advantage.

Key Responsibilities
Backup Management and Disaster Recovery (Primary)
Design and implement AWS-native backup and restore solutions using AWS Backup, Amazon S3, RDS, EFS, and associated services. Develop and deploy Disaster Recovery (DR) strategies, ensuring compliance with RTO (Recovery Time Objectives) and RPO (Recovery Point Objectives). Configure automated backup policies for EC2 instances, RDS databases, EFS, and EBS volumes to meet organizational requirements. Implement cross-region and cross-account backup solutions for enhanced resilience and security. Periodically perform data restoration tests to validate the effectiveness of backup and DR strategies. Monitor and maintain the performance of backup solutions, addressing failures or inconsistencies proactively. Leverage AWS infrastructure automation tools (e.g., AWS CLI, CloudFormation, Terraform) to streamline backup and DR processes. Ensure backup and DR solutions adhere to compliance, governance, and security standards.

AWS Services (Secondary)
Manage and optimize AWS EC2 instances, including configuration, monitoring, and troubleshooting. Design, configure, and secure scalable S3 buckets for storage, backups, and lifecycle management. Handle RDS provisioning, backup configurations, scaling, and troubleshooting. Work with VPC, IAM, and CloudWatch to maintain secure and well-monitored infrastructure.

Required Skills and Qualifications
2 to 5 years of experience in AWS environments with a focus on backup, recovery, and DR solutions. Strong expertise in AWS Backup, S3, EC2, RDS, and EFS. Proven experience in designing and implementing Disaster Recovery (DR) solutions in AWS. Familiarity with cross-region and cross-account backup architectures. Knowledge of infrastructure automation using AWS CLI, CloudFormation, or Terraform. Proficiency in monitoring tools (e.g., CloudWatch, AWS Config). Experience with scripting (e.g., Python, Bash, or PowerShell) for automation. Multi-Cloud Platforms: experience with Azure Backup and Recovery services, as well as GCP's backup and DR solutions. Cross-Cloud DR Strategies: knowledge of designing DR solutions across hybrid or multi-cloud environments. Migration Experience: experience in migrating backup and DR solutions from on-premises to cloud or between cloud providers.

Preferred Qualifications
Certifications: AWS Solutions Architect Associate, AWS Certified SysOps Administrator, or similar. Additional Cloud Services: familiarity with services like Lambda, DynamoDB, and EKS. Compliance Knowledge: understanding of data encryption, compliance, and security standards.

Soft Skills
Strong analytical and problem-solving abilities. Excellent communication and teamwork skills. Ability to handle and prioritize multiple tasks in a dynamic environment.
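As a hedged illustration of the automated backup policies this posting describes, the sketch below uses Python with boto3 to create an AWS Backup plan that takes daily backups, copies recovery points to a second region, and assigns resources by tag. The vault names, regions, account ID, and IAM role ARN are placeholders for illustration, not values from the posting.

```python
# Minimal sketch: create an AWS Backup plan with a daily rule and a
# cross-region copy, then assign resources to it by tag.
# Vault names, regions, account ID, and IAM role ARN are placeholders.
import boto3

backup = boto3.client("backup", region_name="ap-south-1")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-prod-backups",
        "Rules": [
            {
                "RuleName": "daily-0100-utc",
                "TargetBackupVaultName": "prod-vault",
                "ScheduleExpression": "cron(0 1 * * ? *)",
                "StartWindowMinutes": 60,
                "CompletionWindowMinutes": 360,
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [
                    {
                        # Cross-region copy for DR resilience
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:eu-west-1:111122223333:"
                            "backup-vault:dr-vault"
                        ),
                        "Lifecycle": {"DeleteAfterDays": 35},
                    }
                ],
            }
        ],
    }
)

# Assign resources to the plan by tag (e.g., everything tagged backup=daily).
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-daily",
        "IamRoleArn": "arn:aws:iam::111122223333:role/aws-backup-default-role",
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "daily",
            }
        ],
    },
)
```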
Posted 3 weeks ago
40.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
Remote
For more than 40 years, Accelya has been the industry’s partner for change, simplifying airline financial and commercial processes and empowering the air transport community to take better control of the future. Whether partnering with IATA on industry-wide initiatives or enabling digital transformation to simplify airline processes, Accelya drives the airline industry forward and proudly puts control back in the hands of airlines so they can move further, faster.

Senior Engineer .NET Developer, Pune

Role Purpose
As Senior Engineer .NET Developer, you will plan and direct the development and execution of the Incubation team's strategies and action plans. The successful candidate will need strong experience in developing software solutions using .NET Core and expertise in AWS technologies (including DynamoDB, Athena, EventBridge, and Lambda) as well as Katalon. This position will be responsible for actively contributing to sprint design and providing mentorship to team members. The focus will be on software development, technical design, and architecture, in line with Accelya's global business strategy, values and missions.

What will you do?
Managing Software Development - Design, code, and implement software solutions using .NET Core and related technologies. Develop high-quality, scalable, and maintainable software applications.
AWS Services Integration - Utilize AWS services such as DynamoDB, Athena, EventBridge, Lambda, S3, and others to design and implement robust and efficient solutions. Integrate software applications with these services, ensuring optimal performance and scalability.
Collaboration and Leadership - Work closely with cross-functional teams, including product managers, designers, and other engineers, to understand requirements and align development efforts. Provide technical leadership, mentorship, and guidance to junior team members.
Technical Design and Architecture - Collaborate with architects and senior engineers to define the technical vision, system architecture, and integration of AWS services. Contribute to the design and implementation of software components, ensuring scalability, performance, and adherence to best practices.
Troubleshooting and Support - Investigate and resolve complex technical issues that arise during development or in production environments. Provide support and assistance to end-users or clients, ensuring the smooth operation of software applications.

Knowledge, Experience & Skills
Bachelor's degree in Computer Science, Software Engineering, or a related field. 5+ years of hands-on experience in software development using .NET Core, with a strong focus on projects leveraging AWS technologies. Expertise in AWS services, including DynamoDB, Athena, EventBridge, and Lambda, as well as Katalon. Proficiency in Agile methodologies, including Scrum or Kanban, and experience working in an Agile environment. Experience with test automation and continuous integration/continuous deployment (CI/CD) pipelines is a plus. AWS certifications, such as AWS Certified Developer - Associate or AWS Certified Solutions Architect, are highly desirable.

What do we offer?
Open culture and challenging opportunity to satisfy intellectual needs Flexible working hours Smart working: hybrid remote/office working environment Work-life balance Excellent, dynamic and multicultural environment About Accelya Accelya is a leading global software provider to the airline industry, powering 200+ airlines with an open, modular software platform that enables innovative airlines to drive growth, delight their customers and take control of their retailing. Owned by Vista Equity Partners long-term perennial fund and with 2K+ employees based around 10 global offices, Accelya are trusted by industry leaders to deliver now and deliver for the future. The company´s passenger, cargo, and industry platforms support airline retailing from offer to settlement, both above and below the wing. Accelya are proud to deliver leading-edge technologies to our customers including through our partnership with AWS and through the pioneering NDC expertise of our Global Product teams. We are proud to enable innovation-led growth for the airline industry and put control back in the hands of airlines. For more information, please visit www.accelya.com What does the future of the air transport industry look like to you? Whether you’re an industry veteran or someone with experience from other industries, we want to make your ambitions a reality! Show more Show less
Posted 3 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview We are looking for a Technical Architect who will be the driving force behind the architectural evolution of our SaaS platform. This is a hands-on leadership role, where you'll work closely with engineers and product teams to shape the technical direction while mentoring and supporting development teams. Key Responsibilities * Architecture & Design: Define and evolve the architectural vision of our SaaS platform with a strong focus on scalability, performance, and security. * Hands-on Contribution: Write clean, scalable, and secure code (primarily in Go and TypeScript/React) as needed to support teams or prove architectural concepts. * Mentorship & Guidance: Provide technical mentorship and architectural guidance to development teams, fostering a culture of engineering excellence and continuous learning. * Cloud Expertise: Lead cloud architecture on AWS, leveraging services like API Gateway, DynamoDB, Lambda, ECS, S3, CloudWatch, etc. * Security & Compliance: Drive security-first development practices and ensure adherence to industry standards such as SOC 2, ISO 27001, and GDPR. * Automation & Tooling: Drive adoption of automation, DevOps practices, and security tooling. * Code & Design Reviews: Conduct regular reviews to ensure high quality, maintainability, and consistency with architectural principles. * Collaboration: Work closely with product management and engineering leadership to align technical direction with business goals. Requirements * 10+ years of software engineering experience with at least 3-5 years in an architectural or technical leadership role. * Proven experience in building scalable SaaS applications with modern cloud technologies (preferably AWS). * Deep expertise in cloud-native and serverless architectures (e.g., AWS Lambda, DynamoDB, S3, API Gateway). * A programming background in Go (Golang) and React/TypeScript is preferred, but not essential. * Excellent understanding of application security, compliance frameworks (SOC 2, ISO 27001, GDPR), and secure software development lifecycle (SSDLC) practices. * Practical mindset—able to balance ideal architecture with real-world constraints. * Strong communication skills and a passion for mentorship and team development. Show more Show less
Posted 3 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Title: Senior Software Engineer

About SailPoint: SailPoint is the leader in identity security for the cloud enterprise. Our identity security solutions secure and enable thousands of companies worldwide, giving our customers unmatched visibility into the entirety of their digital workforce, ensuring workers have the right access to do their job – no more, no less.

About the role: Want to be on a team that is full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Engineering team does just that. Our engineering is where high-quality professional engineering meets individual impact. Our team creates products that are built on a mature, cloud-native, event-driven microservices architecture hosted in AWS. SailPoint is seeking a Senior Software Engineer to help develop Internal Applications. We are looking for well-rounded backend or full stack engineers who are passionate about SaaS development, microservices, and web services.

Responsibilities
Produce design documents, analysis documents, architecture diagrams and rough estimates, and develop features based on product requirements. Participate in team grooming and planning activities. Work with the team lead and manager to influence priority for technical items. Responsible for code quality of delivered items by performing unit, integration and development testing. Contribute to training and onboarding of new resources. Give product demos to customers/internal stakeholders. Contribute to resolving customer queries/escalations. Create new environments as required.

Requirements
Minimum 5+ years of experience as a well-rounded backend or full stack engineer. Minimum 1-3 years of experience in Golang and associated frameworks. Experience using ReactJS and associated frameworks. Experience developing web services, RESTful APIs, and RESTful API testing. Experience using microservices in a multi-tenant SaaS application. Experience using SQL/NoSQL, EKS, Kafka, Redis. Experience using logging, monitoring, alerting, and visualization tools like OpenSearch, Prometheus, Grafana. Experience working with remote teams (US time zones). Good to have: automation experience handling automation frameworks, backend API and UI automation. Good to have: knowledge and testing experience with Amazon AWS (S3, Lambda, DynamoDB, CloudWatch, etc.). Good to know Docker and its deployment, along with container spin-up, grid, scaling, etc. Should have strong analytical skills, attention to detail and excellent troubleshooting/problem-solving skills to address complex technical problems. Team player with strong communication skills, excellent organizational and planning skills, and the ability to work on multiple tasks concurrently. Good to have: experience working with JIRA for Agile development and defect management.

Nice to have
Experience with Backstage. Experience with Continuous Delivery.

What success looks like in the role
Within the first 30 days you will: Onboard into your new role, get familiar with our product offering and technology, proactively meet peers and stakeholders, and set up your development environment. Seek to deeply understand the technology or common engineering challenges. Take on and deliver your first work tasks.
By 90 days: Proactively implement different enhancements and defect fixes by interacting independently with different (sometimes many) stakeholders, architects and members of your team.
Take a committed approach to contributing to different projects development alongside less experienced engineers on your team—there’s no room for ivory towers here. By 6 months: Collaborates with Product Management and Engineering Manager to estimate and develop small to medium complexity features more independently. Lead projects with small group of 3-4 members. Participate in resource planning, backlog refinement activity. Occasionally serve as an analysis expert during escalations of systems issues that have evaded the ability of less experienced engineers to solve in a timely manner. Share support of critical team systems by participating in calls with customers, learning the characteristics of currently running systems, and participating in improvements. About SailPoint India and our Benefits: Nestled in the heart of Pune, a bustling hub of technology and culture, the office exemplifies SailPoint's commitment to excellence. Surrounded by a vibrant atmosphere, the Pune office serves as a strategic center for the company, where cutting-edge solutions are crafted and implemented to address the ever-evolving challenges in identity security. With a team of highly skilled professionals, the office embodies SailPoint's values of Integrity, Innovation, Impact and Individuals. Our Pune team works under a hybrid model enjoying the office 3 days a week (unless otherwise specified). We provide excellent office amenities, competitive salaries and strong benefits: Our benefits program offers medical insurance for employees and their dependents, accident insurance and term life insurance for all employees. All premiums are paid by SailPoint. Company sponsored health-checkups for employees and discounted rates for dependents Annual performance bonus 24 Leaves every year in addition, 10 holidays Flexible Work hours SailPoint is an equal opportunity employer and we welcome all qualified candidates to apply to join our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other category protected by applicable law. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact hr@sailpoint.com or mail to 11120 Four Points Dr, Suite 100, Austin, TX 78726, to discuss reasonable accommodations. Show more Show less
Posted 3 weeks ago
60.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
About Yodaplus Technologies Founded in 2016, Yodaplus Technologies is a cutting-edge technology solutions company specializing in secure, scalable innovations across FinTech, Enterprise Blockchain, and Supply Chain sectors. With a leadership team bringing over 60 years of combined expertise in capital markets and BFSI, we pride ourselves on delivering world-class AI-driven trade finance platforms, digital transformation services, and supply chain optimizations for global clients across the US, Singapore, UAE, and India. Our flagship solutions, including DocuTrade and AI-powered GenRPT, are revolutionizing operational visibility and efficiency for logistics and enterprise clients worldwide. At Yodaplus, our culture is driven by passion, transparency, integrity, and relentless innovation—we empower our teams to unlearn, relearn, and push the boundaries of technology. Why Join Us? · Accelerate your career growth: Take ownership of impactful projects, contribute to next-gen cloud-native applications, and fast-track your path toward senior leadership roles. · Work with emerging technologies: Build scalable, serverless applications using React, AWS services, Python, and more. · Collaborative & supportive environment: Thrive in an engineering-driven culture that encourages continuous learning, mentorship, and cross-functional collaboration. · Global exposure: Work alongside diverse teams and clients across multiple geographies, gaining unparalleled domain and technical expertise. · Excellent perks & benefits: Competitive salary, flexible working hours, remote work options, professional development budgets, health insurance, wellness programs, and more Your Role As a Senior Software Engineer at Yodaplus, you will: · Design, and develop modern, high-performance cloud-native applications using React (TypeScript/JavaScript), Python, and AWS serverless technologies. · Build and implement scalable backend services leveraging Python, AWS Lambda, EventBridge, AppSync, S3, Glue, and other cloud-native tools. · Drive best practices around code quality, security, performance, and deployment automation. · Mentor junior engineers, foster knowledge sharing, and collaborate with product and business teams to deliver innovative solutions. · Support CI/CD pipelines, troubleshoot production issues, and contribute to ongoing infrastructure and automation improvements. · Ensure application reliability, security, performance, and maintainability across all layers of the stack. · Support continuous integration and deployment pipelines using Git-based workflows, Docker, and AWS SAM. · Assist with troubleshooting, debugging, and resolving production issues. · Contribute to automation and infrastructure improvements to enhance deployment efficiency and system reliability. What We’re Looking For · Bachelor’s degree in Computer Science, Engineering, or a related field. · 2-4 years of experience in full-stack software development with a strong focus on React, Python, and AWS serverless. · Hands-on experience with AWS SAM, Lambda, API Gateway, DynamoDB, S3, and CloudWatch. · Comfortable working with REST/GraphQL APIs and relational/non-relational databases (PostgreSQL, DynamoDB). · Passionate about scalable, maintainable code and delivering high-impact software solutions. · Strong communication skills and a collaborative mindset. Ready to grow with us? Join Yodaplus Technologies and be part of a visionary team shaping the future of fintech and enterprise blockchain solutions. Apply now and unlock your potential! 
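As a rough, non-authoritative sketch of the serverless backend work this posting describes (Python, AWS Lambda, API Gateway, DynamoDB), the handler below accepts an API Gateway proxy request and writes a record to a DynamoDB table. The table name, environment variable, and payload fields are assumptions for illustration only.

```python
# Minimal sketch of a serverless backend handler of the kind described above:
# an AWS Lambda function behind API Gateway that writes an order record to
# DynamoDB. The table name and payload fields are illustrative only.
import json
import os
import uuid

import boto3

TABLE_NAME = os.environ.get("ORDERS_TABLE", "orders")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def lambda_handler(event, context):
    """Handle POST /orders from an API Gateway proxy integration."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    item = {
        "order_id": str(uuid.uuid4()),
        "customer": body.get("customer", "unknown"),
        "amount": str(body.get("amount", 0)),  # DynamoDB rejects floats; store as string/Decimal
    }
    table.put_item(Item=item)

    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item),
    }
```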
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Amagi
We are a next-generation media technology company that provides cloud broadcast and targeted advertising solutions to broadcast TV and streaming TV platforms. Amagi enables content owners to launch, distribute, and monetize live linear channels on Free Ad-supported Streaming TV and video services platforms. Amagi also offers 24x7 cloud-managed services bringing simplicity, advanced automation, and transparency to the entire broadcast operations. Overall, Amagi supports 700+ content brands, 800+ playout chains, and over 2500 channel deliveries on its platform in over 40 countries. Amagi has a presence in New York, Los Angeles, Toronto, London, Paris, Melbourne, Seoul, and Singapore, with broadcast operations in New Delhi and an innovation center in Bangalore. For more information visit us at www.amagi.com

About The Team
We are a business operating systems team, and it is easy to define what we do by our charter:
# 100% of Amagi provisioned resources are aligned with sold SKU (Sold = Deployed = Billed)
# 100% of Amagi hosted services are provisioned through traceable methods
# 100% accurate and automated billing for subscription and consumption based usages
Well, this is not rocket science to build! Also, it is not a smooth sail through still water, given the historical blunders that we have collectively committed. So what are we trying to do - fix the past and fix for the future. We are developing a bunch of tiny, micro and full scale services to manage the business workflows within Amagi. The languages include Python, Go, Javascript/Typescript and, of course, our own human language.
We need a software engineer, not just a software developer. We need someone who will listen and articulate well. We need someone who will figure out what to do on one's own. We need someone who will make the job interesting, instead of looking for an interesting job.

Position - Software Engineer and Senior Software Engineer
Location: Bangalore
Role Reporting into: Engineering Manager
Does this role have direct reports?: No

Job Responsibilities:
You will be responsible for: Designing and coding the right solutions starting with broadly defined problems in the broadcast domain. Designing and writing highly available, RESTful, scalable and distributed backend applications using modern programming languages (like Python, Golang, Ruby), database systems (modern SQL/NoSQL DBs, Redis, MySQL, DynamoDB, MongoDB, etc.), messaging/communication frameworks (PubNub, ZeroMQ, gRPC, REST) and orchestration systems (Docker, Kubernetes). Developing microservices running on edge servers, private clouds or public cloud platforms like AWS and GCP. End-to-end responsibility, which includes gathering engineering requirements, designing solutions, implementing and writing reusable, testable, and efficient code, testing and building test frameworks for your own applications, writing frameworks for deploying your applications, taking part in peer code reviews, and mentoring new people and freshers. Driving best practices and engineering excellence. Working with other team members to develop the architecture and design of new and current systems. Working in an agile environment to deliver high quality software. Working closely with quality assurance teams and devops/ops teams to take your product to deployment.

Requirements
You should have: Good learning ability to grasp new domains and comfort to understand both depth and breadth across the technology platform(s). Good written and oral communication skills to enable effective coordination and
implementation across the organization Bachelor's Degree or Master's Degree in Computer Science or related field A solid foundation in computer science, with strong competencies in data structures, algorithms, and software design Proficiency in, at least, one modern high level programming language such as Python, Golang, Java or Ruby Expertise in Linux fundamentals Preferably, experience in AWS services like S3, EC2, EBS, EKS or equivalent services in GCP or Azure Preferred Work Experience: 2 - 6 Years Education/Qualifications: BE/BTech/MTech Show more Show less
Posted 3 weeks ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
About The Role We are looking for a highly skilled and autonomous Backend Engineer with deep expertise in Python, microservices architecture, and API design to join a high-impact engineering team working on scalable internal tools and enterprise SaaS platforms. You will play a key role in system architecture, PoC development, and cloud-native service delivery, collaborating closely with cross-functional teams. Key Responsibilities Design and implement robust, scalable microservices using Python and related frameworks. Develop and maintain high-performance, production-grade RESTful APIs and background jobs. Lead or contribute to PoC architecture, system modularization, and microservice decomposition. Design and manage relational and NoSQL data models (PostgreSQL, MongoDB, DynamoDB). Build scalable, async batch jobs and distributed processing pipelines using Kafka, RabbitMQ, and SQS. Drive best practices around error handling, logging, security, and observability (Grafana, CloudWatch, Datadog). Collaborate across engineering, product, and DevOps to ship reliable features in cloud environments (AWS preferred). Contribute to documentation, system diagrams, and CI/CD pipelines (Terraform, GitHub Actions). Requirements 8+ years of hands-on experience as a backend engineer Strong proficiency in Python (Flask, FastAPI, Django, etc.) Solid experience with microservices architecture and containerized environments (Docker, Kubernetes, EKS) Proven expertise in REST API design, rate limiting, security, and performance optimization Familiarity with NoSQL & SQL databases (MongoDB, PostgreSQL, DynamoDB, ClickHouse) Experience with cloud platforms (AWS, Azure, or GCP AWS preferred) CI/CD and Infrastructure as Code (Jenkins, GitHub Actions, Terraform) Exposure to distributed systems, data processing, and event-based architectures (Kafka, SQS) Excellent written and verbal communication skills Bonus: Experience integrating with tools like Zendesk, Openfire, or ticketing/chat systems Preferred Qualifications Bachelors or Masters degree in Computer Science or related field Certifications in System Design or Cloud Architecture Experience working in agile, distributed teams with a strong ownership mindset (ref:hirist.tech) Show more Show less
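A minimal sketch, assuming FastAPI (one of the frameworks named in this posting), of the kind of production-grade REST endpoint the role describes. The resource name, fields, and in-memory store are illustrative stand-ins for a real PostgreSQL or MongoDB backend.

```python
# Minimal sketch of a small FastAPI microservice in the style described above:
# a typed request model, a health probe, and explicit error handling.
# Endpoint names and fields are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="ticket-service")


class TicketIn(BaseModel):
    subject: str
    priority: int = 3


TICKETS: dict[int, TicketIn] = {}  # stand-in for PostgreSQL/MongoDB


@app.get("/healthz")
def healthz() -> dict:
    return {"status": "ok"}


@app.post("/tickets", status_code=201)
def create_ticket(ticket: TicketIn) -> dict:
    ticket_id = len(TICKETS) + 1
    TICKETS[ticket_id] = ticket
    return {"id": ticket_id, **ticket.dict()}


@app.get("/tickets/{ticket_id}")
def get_ticket(ticket_id: int) -> dict:
    if ticket_id not in TICKETS:
        raise HTTPException(status_code=404, detail="ticket not found")
    return {"id": ticket_id, **TICKETS[ticket_id].dict()}
```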
Posted 3 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Data Architect, you will design and implement scalable, cloud-native data solutions that handle petabyte-scale datasets. You will lead architecture discussions, build robust data pipelines, and work closely with cross-functional teams to deliver enterprise-grade data platforms. Your work will directly support analytics, AI/ML, and real-time data processing needs across global clients.

Key Responsibilities
Translate complex data and analytics requirements into scalable technical architectures. Design and implement cloud-native architectures for real-time and batch data processing. Build and maintain large-scale data pipelines and frameworks using modern orchestration tools (e.g., Airflow, Oozie). Define strategies for data modeling, integration, metadata management, and governance. Optimize data systems for cost-efficiency, performance, and scalability. Leverage cloud services (AWS, Azure, GCP) including Azure Synapse, AWS Redshift, BigQuery, etc. Implement data governance frameworks covering quality, lineage, cataloging, and access control. Work with modern big data technologies (e.g., Spark, Kafka, Databricks, Snowflake, Hadoop). Collaborate with data engineers, analysts, DevOps, and business stakeholders. Evaluate and adopt emerging technologies to improve data architecture. Provide architectural guidance in cloud migration and modernization projects. Lead and mentor engineering teams and provide technical thought leadership.

Skills and Experience
Bachelor's or Master's in Computer Science, Engineering, or related field. 10+ years of experience in data architecture, engineering, or platform roles. 5+ years of experience with cloud data platforms (Azure, AWS, or GCP). Proven experience building scalable enterprise data platforms (data lakes/warehouses). Strong expertise in distributed computing, data modeling, and pipeline optimization. Proficiency in SQL and NoSQL databases (e.g., Snowflake, SQL Server, Cosmos DB, DynamoDB). Experience with data integration tools like Azure Data Factory, Talend, or Informatica. Hands-on experience with real-time streaming technologies (Kafka, Kinesis, Event Hub). Expertise in scripting/programming languages such as Python, Spark, Java, or Scala. Deep understanding of data governance, security, and regulatory compliance (GDPR, HIPAA, CCPA). Strong communication, presentation, and stakeholder management skills. Ability to lead multiple projects simultaneously in an agile environment. (ref:hirist.tech)
Posted 3 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements Description and Requirements Join our team and what we’ll accomplish together The Wireless Core Network Development team is responsible for End to End network architecture, development, and operations including service orchestration and automation. The team designs, develops, maintains, and supports our Core Wireless Network and all its services specific to our customer data. We work as a team to introduce the latest technology and software to enable network orchestration and automation in the fast evolving 5G ecosystem, and propel TELUS’ digital transformation. Our team creates previously impossible solutions by leveraging the new approach to enable our customers unique and rich wireless experiences. These innovative solutions will improve the life quality of thousands while revolutionizing how everything and everyone connects. You will own the customer experience by providing strategy, managing change and leveraging best in class security and AI to deliver reliable products to our customers. This will represent a fundamental change on how the Telecom industry works, opening the possibility of making private cellular networks globally available, sparking innovation and enabling access to the digital world to more people by providing never seen reliability at reduced costs. What you'll do Overall responsibility for the architecture, design and operational support of TELUS subscriber database solutions (HLR, HSS, EIR, IMEIDB, UDM, UDR); This includes but is not limited to understanding fully how the current network is architected & identifying areas of improvement/modernization that we need to undertake driving reliability and efficiency in the support of the solution Help us design, develop, and implement software solutions supporting the subscriber data platforms within the 5G core architecture.. This will include management, assurance and closed-loop of the UDM, AUSF and SDL which will reside on a cloud native services Bring your ideas, bring your coding skills, and bring your passion to learn Identify E2E network control signaling and roaming gap, available and ongoing design, together with architecting future-friendly solutions as technology evolves Collaborate with cross functional teams from Radio, Core, Transport, Infrastructure, Business and assurance domain, define migration strategies for moving services to cloud. 
Bring your experience in Open API, security, configuration, data model management and processing Node JS, and learn or bring your experience in other languages like RESTful, JSON, NETCONF, Apache Nifi, Kafka, SNMP, Java, Bash, Python, HTTPS, SSH TypeScript and Python Maintain/develop Network Architecture/Design document Additional Job Description What you bring: 5+ years of telecommunication experience Experienced in adapter API design using RESTful, NETCONF, interested in developing back-end software Proven knowledge of technologies such as Service Based Architecture (SBA), Subscriber Data Management functions, http2, Diameter, Sigtran, SS7, and 5G Protocol General understanding of TCP/IP networking and familiarity with TCP, UDP, SS7 RADIUS, and Diameter protocols along with SOAP/REST API working principles Proven understanding of IPSEC, TLS 1.2, 1.3 and understanding of OAUTH 2.0 framework 2 + years’ experience as a software developer, advanced technical and analytical skills, and the ability to take responsibility for the overall technical direction of the project Experience with Public Cloud Native Services like Openshift, AWS, GCP or Azure Expert knowledge in Database redundancy, replication, Synchronization Knowledge of different database concepts (relational vs non-relational DB) Subject Matter Expert in implementing, integrating, and deploying solutions related to subscriber data management (HLR, HSS, EIR, IMEIDB, UDM, UDR,F5, Provisioning GW, AAA on either private cloud or public cloud like AWS, OCP or GCP Expert knowledge of the software project lifecycle and CI/CD Pipelines A Bachelor degree in Computer Science, Computer Engineering, Electrical Engineering, STEM related field or relevant experience Great-to-haves: Understanding of 3GPP architectures and reference points for 4G and 5G wireless networks Knowledge of 3GPP, TMF, GSMA, IETF standard bodies Experience with Radio, Core, Transport and Infrastructure product design, development, integration, test and operations low level protocol implementation on top of UDP, SCTP, GTPv1 and GTPv2 Experience with MariaDB, Cassandra DB, MongoDB and Data Model Management AWS Fargate, Lambda, DynamoDB, SQS, Step Functions, CloudWatch, CloudFormation and/or AWS Cloud Development Kit Knowledge of Python, and API development in production environments Experience with containerization tools such as Docker, Kubernetes, and/or OpenStack technology Soft Skills: Strong analytical and problem-solving abilities Excellent communication skills, both written and verbal Ability to work effectively in a team environment Self-motivated with a proactive approach to learning new technologies Capable of working under pressure and managing multiple priorities EEO Statement At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources and exceptional customer service - all backed by TELUS, our multi-billion dollar telecommunications parent. 
Equal Opportunity Employer At TELUS Digital, we are proud to be an equal opportunity employer and are committed to creating a diverse and inclusive workplace. All aspects of employment, including the decision to hire and promote, are based on applicants’ qualifications, merits, competence and performance without regard to any characteristic related to diversity. Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Consumer & Community Banking, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives. Job Responsibilities Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics. Utilize Python for data processing and transformation tasks, ensuring efficient and reliable data workflows. Implement data orchestration and workflow automation using Apache Airflow. Deploy and manage containerized applications using Kubernetes (EKS) and Amazon ECS. Use Terraform for infrastructure provisioning and management, ensuring a robust and scalable data infrastructure. Develop and optimize data models to support business intelligence and analytics requirements. Work with graph databases to model and query complex relationships within data. Create and maintain interactive and insightful reports and dashboards using Tableau to support data-driven decision-making. Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs. Implement AWS enterprise solutions, including Redshift, S3, EC2, Data Pipeline, and EMR, to enhance data processing capabilities. Work hands-on with SPARK to manage and process large datasets efficiently. Required Qualifications, Capabilities, And Skills Formal training or certification on software engineering concepts and 3+ years applied experience Hands-on practical experience programming skills in Python, with basic knowledge of Java. Experience with Apache Airflow for data orchestration and workflow management. Familiarity with container orchestration platforms such as Kubernetes (EKS) and Amazon ECS. Experience with Terraform for infrastructure as code and cloud resource management. Proficiency in data modeling techniques and best practices. Exposure to graph databases and experience in modeling and querying graph data. Experience in creating reports and dashboards using Tableau. Experience with AWS enterprise implementations, including Redshift, S3, EC2, Data Pipeline, and EMR. Hands-on experience with SPARK and managing large datasets. Experience in implementing ETL transformations on big data platforms, particularly with NoSQL databases (MongoDB, DynamoDB, Cassandra). Preferred Qualifications, Capabilities, And Skills Familiarity with modern front-end technologies Exposure to cloud technologies ABOUT US Show more Show less
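As a hedged sketch of the Apache Airflow orchestration mentioned in this posting, the DAG below wires a three-step extract-transform-load flow using PythonOperator tasks (Airflow 2.x assumed). The DAG id, schedule, and sample data are assumptions; a real pipeline would read from and write to S3, Redshift, or EMR as the posting describes.

```python
# Minimal sketch of an Airflow DAG for the kind of ETL orchestration described
# above: extract raw rows, transform them with pandas, and load the result.
# The DAG id, schedule, and sample data are placeholders.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # In practice this would read from S3 (e.g., via boto3 or s3fs).
    return [{"account": "A1", "amount": 120.0}, {"account": "A2", "amount": 75.5}]


def transform(ti, **context):
    rows = ti.xcom_pull(task_ids="extract")
    df = pd.DataFrame(rows)
    df["amount_usd"] = df["amount"].round(2)
    return df.to_dict(orient="records")


def load(ti, **context):
    rows = ti.xcom_pull(task_ids="transform")
    # In practice this would write to Redshift or S3; here we just log the count.
    print(f"loading {len(rows)} rows")


with DAG(
    dag_id="daily_accounts_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3
```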
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description Key Responsibilities: Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured). Continuously monitor and troubleshoot data quality and data integrity issues. Implement data governance processes and methods for managing metadata, access, and retention for internal and external users. Develop reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms using ETL/ELT tools or scripting languages. Develop physical data models and implement data storage architectures as per design guidelines. Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models. Participate in testing and troubleshooting of data pipelines. Develop and operate large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB). Use agile development technologies, such as DevOps, Scrum, Kanban, and continuous improvement cycles, for data-driven applications. Responsibilities Competencies: System Requirements Engineering: Translate stakeholder needs into verifiable requirements; establish acceptance criteria; track status throughout the system lifecycle; assess impact of changes. Collaborates: Build partnerships and work collaboratively with others to meet shared objectives. Communicates Effectively: Develop and deliver multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer Focus: Build strong customer relationships and deliver customer-centric solutions. Decision Quality: Make good and timely decisions that keep the organization moving forward. Data Extraction: Perform ETL activities from various sources and transform them for consumption by downstream applications and users. Programming: Create, write, and test computer code, test scripts, and build scripts using industry standards and tools. Quality Assurance Metrics: Apply measurement science to assess solution outcomes using ITOM, SDLC standards, tools, metrics, and KPIs. Solution Documentation: Document information and solutions based on knowledge gained during product development activities. Solution Validation Testing: Validate configuration item changes or solutions using SDLC standards and metrics. Data Quality: Identify, understand, and correct data flaws to support effective information governance. Problem Solving: Solve problems using systematic analysis processes and industry-standard methodologies. Values Differences: Recognize the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Nice To Have Experience Understanding of the ML lifecycle. Exposure to Big Data open source technologies. Familiarity with clustered compute cloud-based implementations. Experience developing applications requiring large file movement for a cloud-based environment. Exposure to building analytical solutions and IoT technology. Work Environment Most work will be with stakeholders in the US, with an overlap of 2-3 hours during EST hours as needed. This role will be Hybrid. 
Qualifications Experience: 3-5 years of experience in data engineering with a strong background in Azure Databricks and Scala/Python. Hands-on experience with Spark (Scala/PySpark) and SQL. Experience with Spark Streaming, Spark Internals, and Query Optimization. Proficiency in Azure Cloud Services. Experience in Agile Development and Unit Testing of ETL. Experience creating ETL pipelines with ML model integration. Knowledge of Big Data storage strategies (optimization and performance). Critical problem-solving skills. Basic understanding of Data Models (SQL/NoSQL) including Delta Lake or Lakehouse. Quick learner. Show more Show less
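A minimal PySpark sketch, not taken from the posting, of the kind of ETL pipeline described above: read raw events, filter and aggregate them, and write partitioned Parquet. The mount paths and column names are placeholders.

```python
# Minimal sketch of a PySpark batch transform of the kind described above:
# read raw events, clean and aggregate them, and write partitioned Parquet.
# Input/output paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_daily_agg").getOrCreate()

events = (
    spark.read.json("/mnt/raw/events/")                 # raw landing zone
    .where(F.col("event_type").isNotNull())             # drop malformed rows
    .withColumn("event_date", F.to_date("event_ts"))
)

daily = (
    events.groupBy("event_date", "event_type")
    .agg(
        F.count("*").alias("events"),
        F.countDistinct("user_id").alias("users"),
    )
)

(
    daily.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("/mnt/curated/events_daily/")              # curated zone
)
```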
Posted 3 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you. As a Java Developer Software Engineer II at JPMorgan Chase within the Corporate technology team, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role. Job Responsibilities Design and develop complex and scalable coding frameworks using appropriate software design methodologies. Produce secure, high-quality production code and review and debug code written by peers. Identify opportunities to eliminate or automate the remediation of recurring issues to enhance operational stability. Contribute to the engineering community by advocating for firmwide frameworks, tools, and practices. Develop innovative software solutions and resolve technical problems with unconventional approaches. Contribute to a team culture of diversity, equity, inclusion, and respect. Required Qualifications, Capabilities, And Skills Formal training or certification on software engineering concepts and 2+ years applied experience. Demonstrated hands-on experience with Java, Spring/Spring Boot, Python, and SQL-related technologies. Proven ability in delivering system design, application development, testing, and ensuring operational stability. Proficient in DevOps practices, automated testing, and continuous integration/deployment tools such as Jenkins, Docker, and Kubernetes. Hands-on experience with data lake or data warehouse technologies, including Databricks, Java, Python, and Spark. Demonstrated proficiency in software applications and technical processes within a technical discipline, such as cloud computing (AWS, PCF, etc.). Proficiency in databases such as Oracle, CockroachDB or DynamoDB, as well as other similar databases. Comprehensive understanding of all aspects of the Software Development Life Cycle. Advanced knowledge of agile methodologies, including CI/CD, application resiliency, and security. Preferred Qualifications, Capabilities, And Skills Exposure to AWS cloud, Terraform, Databricks/Snowflake through hands-on experience or certification. Advanced proficiency in other programming languages, such as Java and Python. ABOUT US Show more Show less
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Cloud, Data Mesh, event-driven systems and high-volume data management are some of the patterns & technologies which are driving the future architecture at JPMC. As a Lead Software Engineer at JPMorgan Chase within the Secure Insight team, you will have the opportunity to impact your career and push the limits of what's possible. You will be at the forefront of maturing patterns and technologies such as Cloud, Data Mesh, event-driven systems, and high-volume data management. You will be responsible for executing creative software solutions, developing secure high-quality production code, and leading evaluation sessions with external vendors, startups, and internal teams. You will also act as a coach and mentor to team members on their assigned project tasks. This role provides an opportunity to solve complex production use cases and contribute to a culture of diversity, equity, inclusion, and respect.

Job Responsibilities
Executes creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Develops secure high-quality production code, and reviews and debugs code written by others. Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems. Leads evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture. Leads communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies. Adds to team culture of diversity, equity, inclusion, and respect. Acts as the coach and mentor to team members on their assigned project tasks. Develops a cohesive data engineering team and ensures their continued success. Conducts product work reviews with team members.

Required Qualifications, Capabilities, And Skills
Formal training or certification on software engineering concepts and 5+ years applied experience. Experience with AWS enterprise implementations including Redshift, S3, EC2, Data Pipeline, and EMR. Hands-on experience working with Spark and handling terabyte-scale datasets. Programming experience in Java and/or Python. Experience in implementing complex ETL transformations on big data platforms such as NoSQL databases (Mongo, DynamoDB, Cassandra). Hands-on practical experience delivering system design, application development, testing, and operational stability. Advanced in one or more programming language(s). Proficiency in automation and continuous delivery methods. In-depth knowledge of the financial services industry and its IT systems. Practical cloud-native experience.

Preferred Qualifications, Capabilities, And Skills
Analyze, design and code business-related solutions, as well as core architectural changes, using an Agile programming approach, resulting in software delivered on time and on budget. Comfortable learning cutting-edge technologies and applications for greenfield projects.
Posted 3 weeks ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are actively seeking an exceptionally motivated individual who thrives on continuous learning and embraces the dynamic environment of a high-velocity team. Joining the Content Productization & Delivery (CPD) organization at Thomson Reuters, you will play a pivotal role in ensuring the quality, reliability, and availability of critical systems. These systems provide a suite of infrastructure services supporting a common set of search and information retrieval capabilities necessary for Thomson Reuters's research-based applications and APIs across its core products. Your responsibilities will encompass delivering content via shared services that underpin all our Tax and Legal Research products. About the role: In this opportunity as a Senior Software Engineer, you will : Actively participates and collaborates in meetings, processes, agile ceremonies, and interaction with other technology groups. Works with Lead Engineers and Architects to develop high performing and scalable software solutions to meet requirement and design specifications. Provides technical guidance, mentoring, or coaching to software or systems engineering teams that are distributed across geographic locations. Proactively share knowledge and best practices on using new and emerging technologies across all the development and testing groups. Assists in identifying and correcting software performance bottlenecks. Provides regular progress and status updates to management. Provides technical support to operations or other development teams by assisting in troubleshooting, debugging, and solving critical issues in the production environment promptly to minimize user and revenue impact. Ability to interpret code and solve problems based on existing standards. Creates and maintains all required technical documentation / manual related to assigned components to ensure supportability. About You: You're a fit for the role of Senior Software Engineer, if your background includes: Bachelor’s or master’s degree in computer science, engineering, information technology or equivalent experience 7+ years of professional software development experience 3+ years of experience with Java and REST based services 3+ years of Python experience Ability to debug and diagnose issues. Experience with version control (Git, GitHub) Experience working with various AWS technologies (DynamoDB, S3, EKS) Experience with Linux Infrastructure as Code, CICD Pipelines Excellent and creative problem-solving skills Strong written and oral communication skills Knowledge of Artificial Intelligence AWS Bedrock, Azure Open AI Large Language Models (LLMs) Prompt Engineering What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. 
Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com. Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Team: TMS & WMS (Bangalore)

Key Accountabilities
Design, implement, test, deploy and maintain innovative software solutions to transform service performance, durability, cost, and security. Use software engineering best practices to ensure a high standard of quality for all of the team deliverables. Write high-quality distributed system software. Work in an agile, startup-like development environment, where you are always working on the most important stuff. In this role you will lead a critical and highly visible function within DP World International Expansion Business. You will be given the opportunity to autonomously deliver the technical direction of the service and the feature roadmap. You will work with extraordinary talent and have the opportunity to hire and shape the team to best execute on the product.

Other
Act as an ambassador for DP World at all times when working; promoting and demonstrating positive behaviors in harmony with DP World’s Founder’s Principles, values and culture; ensuring the highest level of safety is applied in all activities; understanding and following DP World’s Code of Conduct and Ethics policies. Perform other related duties as assigned.

Qualifications, Experience And Skills
Basic qualifications
Bachelor’s Degree in Computer Science or related field, or equivalent experience to a Bachelor's degree based on 3 years of work experience for every 1 year of education. 2-4 years of professional experience in software development; you will be able to discuss in depth both the design and your significant contributions to one or more projects. Solid understanding of computer science fundamentals: data structures, algorithms, distributed system design, databases, and design patterns. Strong coding skills with a modern language (ReactJS, Java, etc.). Experience working in an Agile/Scrum environment and with DevOps automation. REST, JavaScript/TypeScript, Node, GraphQL, PostgreSQL, MongoDB, Redis, Angular, ReactJS, Vue, AWS, machine learning, geolocation and mapping APIs.

Preferred Qualifications
Experience with distributed system performance analysis and optimization. Familiarity with AWS services (RDS, DynamoDB, Lambda, Kinesis, SNS, CloudWatch, …). Experience in NLP, deep learning and machine learning. Experience in training machine learning models or developing machine learning infrastructure. Strong communication skills; you will be required to proactively engage colleagues both inside and outside of your team. Ability to effectively articulate technical challenges and solutions. Deal well with ambiguous/undefined problems; ability to think abstractly.
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Job Title: Backend Developer Intern
Job ID: 0472
Work Mode: Remote
Experience Required: Fresher
Stipend: ₹20,000 per month
About The Role
We’re looking for a Backend Developer Intern who can learn fast, ship faster, and isn’t afraid to work with production-level code. You’ll dive deep into the core of our AI infrastructure — building APIs, microservices, and automations using Python and AWS Lambda. If you’re someone who figures things out before the tutorial ends, we want to meet you.
What You’ll Work On
Develop and maintain serverless functions on AWS Lambda.
Write clean, scalable backend code in Python.
Integrate third-party APIs (Freshdesk, Shopify, WhatsApp, etc.).
Work with backend systems involving queues, caching, logging, and error handling.
Take full ownership of modules — from spec to deployment.
Debug fast, learn faster, and ship the fastest.
Must-Have Skills
Strong fundamentals in Python and HTTP APIs.
Exposure to AWS services like Lambda, API Gateway, DynamoDB, S3, etc.
Basic understanding of RESTful architecture.
Solid grasp of Git and command-line tools.
A natural curiosity and a self-starter attitude — you break things, fix them, and learn.
Nice-to-Have (Bonus Points)
Experience with async task queues (e.g., Celery, SQS).
Fluency with Swagger or Postman for API testing.
Previous internship or personal backend projects.
Comfort working independently with minimal hand-holding.
Why Join Us?
Work on real production code, not just toy assignments.
Learn directly from founders and senior engineers.
Enjoy 100% ownership of your code — and the fast feedback that comes with it.
Be part of a mission-driven team building tools people actually use and love.
Note: This is a paid internship.
Skills: AWS, Python, RESTful APIs, HTTP APIs, Git, RESTful architecture, command-line tools, AWS Lambda
Show more Show less
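For candidates new to serverless, a minimal sketch of the kind of function described above (a Python handler behind API Gateway persisting records to DynamoDB) might look like the following. The table name, field names, and route shape are hypothetical and only illustrate the pattern, not this company's actual codebase.

```python
import json
import os
import uuid

import boto3

# Hypothetical table name, normally supplied via the Lambda environment configuration.
TABLE = boto3.resource("dynamodb").Table(os.environ.get("ORDERS_TABLE", "orders"))


def handler(event, context):
    """API Gateway (proxy integration) -> Lambda -> DynamoDB write."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    item = {
        "id": str(uuid.uuid4()),          # partition key
        "customer": body.get("customer"),
        "status": "NEW",
    }
    TABLE.put_item(Item=item)

    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item),
    }
```

Before wiring it to API Gateway, the same handler can be exercised locally by calling handler({"body": json.dumps({"customer": "test"})}, None).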
Posted 3 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role Description Role Proficiency: Owns the overall delivery providing expert consultancy to developers and leads in the area of specialization; ensuring process level and customer level compliance. Outcomes Promote technical solutions which support the business requirements within the area of expertise. Ensures IT requirements are met and service quality maintained when introducing new services. Considers the cost effectiveness of proposed solution(s). Set FAST goals and provide feedback to FAST goals of mentees Innovative and technically sound in analyzing projects in depth Define and evaluate standards and best practices for the technology area of expertise. Collaborate with architects by helping them in choosing the technology tools for solutioning. Proactively suggest new technologies for improvements over the existing technology landscape. Leads technical consultancy assignments which involve specialists from various disciplines; taking responsibility for quality timely delivery and the appropriateness of the teams'; recommendations Make recommendations on how to improve the effectiveness efficiency and delivery of services using technology and methodologies. Measures Of Outcomes Adherence to engineering process and standards (coding standards) Defined productivity standards for project Schedule Adherence Mandatory Trainings/Certifications Innovativeness (In terms of how many new ideas/thought processes/standards/best practices he/she has come up with) Maintain quality standards for individual and team Adhere to project schedules for individual and team Number of technical issues uncovered during the execution of the project Number of defects in the code Number of defects post delivery Number of noncompliance issues On time completion of mandatory compliance trainings Adhere to organizational policies and processes Code Outputs Expected: Independently develop code for the above Define and maintain technical standards and best practices Configure Implement and monitor configuration process Test Review unit test cases scenarios and execution Documentation Sign off templates checklists guidelines standards for design/process/development Sign off deliverable documents; design documentation Requirements test cases and results Manage Defects Perform defect RCA and mitigation Design Creation of design (HLD)architecture for Applications/feature/Business Components/Data Models Interface With Customer Proactively influence customer thought process Consider NPS Score for customer and delivery performance Certifications Forecast the roadmap for future technical certifications Domain Relevance Develop features and components with thorough understanding of the business problem being addressed for the client Manage Project Technically overseeing and taking ownership of end to end project lifecycle Manage Knowledge Consume and contribute to project related documents share point libraries and client universities Contribute to sharing knowledge upskilling in TICL GAMA etc Mentoring and training within the account and the organization. Assists Others In Resolving Complex Technical Problems Manage all aspects of problem management activities investigating the root cause of problems and recommending SMART (specific measurable achievable realistic timely) solutions Development And Review Of Standards & Documentation Define software process improvement activities and communicate them to a range of individuals teams and other bodies. 
Leading Complex Projects
Leads the technical activities in a significant or complex project or portfolio of projects; accountable to the Technical Engineer, Project Manager or Portfolio Manager for the delivery and quality of technical deliverables.
Skill Examples
Ability to provide expert opinions to business problems
Proactively identify solutions for technical issues
Ability to create technical evaluation procedures
Coaches and leads others in acquiring knowledge and provides expert advice
Ability to translate conceptual solutions to technology solutions by choosing the best technical tools
Ability to estimate project effort based on the requirement
Perform and evaluate test results against product specifications
Break down complex problems into logical components
Interface with other teams, designers and other parallel practices
Set goals for self and team; provide feedback to team members
Create and articulate impactful technical presentations
Follow high-level business etiquette in emails and other business communication
Drive conference calls with customers and answer customer questions
Proactively ask for and offer help
Ability to work under pressure, determine dependencies and risks, facilitate planning and handle multiple tasks
Build confidence with customers by meeting the deliverables on time and with quality
Ability to design a new system from scratch
Capability to take up reengineering of existing systems by understanding the functionality
Capability to estimate and present to the client
Ability to contribute ideas and innovations
Knowledge Examples
Deep-level proficiency in the specialist area
Proficiency in technology stacks and appropriate software programs/modules
Programming languages, DBMS, operating systems and software platforms
SDLC and integrated development environments (IDE)
Agile – Scrum or Kanban methods
Knowledge of the customer domain and sub-domain where the problem is solved
Knowledge of new technologies (e.g., data science, AI/ML, IoT, big data, cloud platforms)
RDBMS and NoSQL
Deep knowledge of architecting solutions and applications on cloud-based infrastructures
Additional Comments
About the Role: We are seeking a highly skilled and motivated Senior Apigee Edge/X Developer to join our team. In this role, you will be responsible for designing, developing, and implementing APIs using Google Cloud's Apigee platform. You will play a key role in building and maintaining our API infrastructure, ensuring high availability, scalability, and security. The ideal candidate will have a strong understanding of API management concepts, cloud technologies, and experience with various databases and messaging systems.
Responsibilities: Design, develop, and deploy APIs using Apigee Edge/X Implement API proxies, policies, and shared flows Build and maintain API documentation and specifications Troubleshoot and debug API issues Ensure API security and performance Collaborate with cross-functional teams to integrate APIs with backend systems Work with cloud technologies (e.g., AWS, GCP) to deploy and manage APIs Utilize and manage data in various databases (Oracle, DynamoDB, NoSQL) Integrate with messaging queues (Kafka, AWS SQS) Stay current with the latest API technologies and trends Provide support during US business hours for a few hours Qualifications: 5+ years of experience in API development and management Strong understanding of API concepts (REST, SOAP, GraphQL) Extensive experience with Apigee Edge/X platform Proficiency in API design and development tools (Swagger, Postman) Experience with cloud technologies (AWS, GCP) Knowledge of various databases (Oracle, DynamoDB, NoSQL) Experience with messaging queues (Kafka, AWS SQS) Excellent debugging and problem-solving skills Strong communication and collaboration skills Ability to work independently and as part of a team Bachelor's degree in Computer Science or related field Bonus Points: Experience with DevOps practices and tools (CI/CD, Jenkins, Git) Knowledge of security protocols (OAuth 2.0, JWT) Experience with containerization technologies (Docker, Kubernetes) Certifications in Apigee or related technologies Skills apigee,gateway,edgex Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Overview: Guidepoint’s Qsight group is a new, high-growth division focused on building market-leading data intelligence solutions for the healthcare industry. Operating like a start-up within a larger high-growth company, Qsight works with proprietary data to generate actionable insights for the world’s leading institutional investors and medical device and pharmaceutical companies. The Qsight team is passionate about creating market intelligence products through rigorous analysis of alternative data to deliver highly relevant, accurate insights to a global client base. Location: Pune - Hybrid. About The Role We are looking for a Backend Developer Intern to join our team. What You’ll Do: As an intern, you will have the opportunity to learn and grow in the following areas: AWS Cloud Services What you'll learn: Develop an understanding of cloud services by assisting in the creation and maintenance of AWS Lambda functions, using AWS CloudWatch for monitoring, and learning about AWS CloudFormation for deploying infrastructure as code. Why it’s cool: Gain hands-on experience with industry-standard cloud technologies that are transforming how businesses build scalable applications. Backend Development (Node.js) What you'll learn: Write and maintain backend services in Node.js, develop APIs, and integrate them with AWS and other cloud services. Learn how to test and debug these APIs to ensure they work seamlessly. Why it’s cool: Get real-world exposure to full-stack development and learn best practices for backend development with JavaScript, one of the most popular programming languages in the world. Database Management (DynamoDB & Geospatial Data) What you'll learn: Work with DynamoDB, a NoSQL database service, and understand its key-value data model. Explore how geospatial data is handled and learn how to design solutions for geospatial queries. Why it’s cool: You'll be introduced to cutting-edge database management practices and get to work with geospatial data, which is highly relevant in modern applications like mapping, tracking, and location-based services. Full-Text Search with OpenSearch What you'll learn: Learn how to integrate and optimize OpenSearch for full-text search capabilities, including indexing, querying, and managing large datasets. Why it’s cool: Understand how powerful search engines work and the importance of optimizing search queries to enhance performance in large-scale applications. API Development & Integration What you'll learn: Gain experience building and maintaining APIs that are scalable and efficient, while following best practices for asynchronous API design. Why it’s cool: You'll dive into real-world API development and testing using tools like Postman, which is an essential skill for any modern developer. Development Tools & Environment What you'll learn: Become proficient with industry-standard tools like Git for version control, Docker for containerization, and VSCode for code editing. Why it’s cool: You’ll learn how to work in a professional development environment and master tools that are crucial for software development. Collaboration What you'll learn: Work closely with experienced developers in a collaborative setting, participate in brainstorming sessions, and contribute to code reviews. Why it’s cool: Learn how a professional development team functions, improve your collaboration and communication skills, and get real-time feedback from senior developers. 
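To make the OpenSearch learning area above concrete, here is a minimal sketch of indexing a document and running a full-text match query using the opensearch-py client. It is shown in Python for consistency with the other sketches on this page even though the team above works primarily in Node.js, and the host, index name, and fields are illustrative assumptions rather than Guidepoint's actual schema.

```python
from opensearchpy import OpenSearch

# Illustrative local endpoint; a real deployment would point at the managed
# OpenSearch domain and use proper authentication.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

INDEX = "articles"  # hypothetical index name

# Index a document (OpenSearch creates the index with default mappings if absent).
client.index(
    index=INDEX,
    id="1",
    body={"title": "Market intelligence weekly", "body": "Device sales rose sharply in Q3."},
    refresh=True,  # make the document searchable immediately; useful in demos and tests
)

# Full-text search: a match query returning a relevance-ranked result set.
response = client.search(
    index=INDEX,
    body={"query": {"match": {"body": "sales growth"}}, "size": 5},
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```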
What You’ll Learn: Hands-on Experience with AWS Cloud Technologies – Work with services like AWS Lambda, CloudWatch, and CloudFormation. Backend Development Knowledge – Gain real-world experience developing, testing, and deploying backend applications with Node.js. Cloud Database & Geospatial Data – Learn to work with DynamoDB and gain insight into how geospatial data is managed in the cloud. Search Engine Technology – Explore full-text search with OpenSearch, and learn how to optimize large datasets for efficient search operations. API Design and Testing – Get practical knowledge of designing, building, and maintaining APIs. Industry Tools & Best Practices – Master tools like Git, Docker, and Postman that are widely used in the industry. Team Collaboration – Work in a supportive environment, attend code reviews, and improve your skills through team collaboration. Required Technical Qualifications: A Strong Desire to Learn – You don’t need to be an expert, but you should have a genuine interest in cloud technologies, backend development, and modern tools. Basic Knowledge of JavaScript – Familiarity with JavaScript or Node.js will be helpful but not required. Interest in Cloud Platforms (AWS) – An eagerness to learn about AWS services like Lambda, DynamoDB, and CloudWatch. Curiosity About APIs and Databases – A desire to dive into API development and gain knowledge about NoSQL databases and full-text search. Problem-Solving Skills – You enjoy troubleshooting and finding solutions to technical challenges. Communication Skills – You should be able to clearly express ideas and ask for help when needed. Team Player – You’ll be collaborating with others, so being able to work well in a team is key. Preferred Qualifications (Not Required): Experience with OpenSearch, Elasticsearch, or other search engines. Familiarity with CI/CD pipelines. Exposure to Docker or other containerization technologies. Knowledge of microservices architecture. What We Offer: Competitive compensation Employee medical coverage Central office location Entrepreneurial environment, autonomy, and fast decisions Casual work environment About Guidepoint: Guidepoint is a leading research enablement platform designed to advance understanding and empower our clients’ decision-making process. Powered by innovative technology, real-time data, and hard-to-source expertise, we help our clients to turn answers into action. Backed by a network of nearly 1.5 million experts and Guidepoint’s 1,300 employees worldwide, we inform leading organizations’ research by delivering on-demand intelligence and research on request. With Guidepoint, companies and investors can better navigate the abundance of information available today, making it both more useful and more powerful. At Guidepoint, our success relies on the diversity of our employees, advisors, and client base, which allows us to create connections that offer a wealth of perspectives. We are committed to upholding policies that contribute to an equitable and welcoming environment for our community, regardless of background, identity, or experience. Show more Show less
Posted 3 weeks ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Location: Noida, India
Thales people architect identity management and data protection solutions at the heart of digital security. Businesses and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information and encrypt data to make the connected world more secure.
Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai and Pune, among others. Over 1,800 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has been playing an essential role in India’s growth story by sharing its technologies and expertise in the Defence, Transport, Aerospace and Digital Identity and Security markets.
Mission description: Solution Architect
The PAY Digital engineering team has a team in charge of transversal activities around Security/Innovation/Automation/Cloud Transformation. As our strategy is to fully move to the cloud, we need to focus on new solution architecture for our products.
Main mission: She/he is in charge of defining architecture patterns, mainly in the cloud scope, to help our teams accelerate their cloud transformation. The role is not only about documenting these patterns but also about experimenting with them and helping the teams apply them in their sprints. A large part of the role is communication: sharing this expertise and helping the teams take ownership of the proposed architecture.
Technical skills / Environment:
Responsibilities: You will be part of the Thales CDI PAY Digital organization, with a strong relationship with the engineering director and all the squads around the world. Your role will mainly be to study, synchronize and share advanced technical topics to support the teams in our Noida location. You’ll have to stay up to date on all the cloud technologies we’re using and learn the payment ecosystem we’re dealing with.
Technical Skills:
Software security: cryptography, PKI, network, web attacks
Software development: Web Applications, NodeJS, J2EE, Java Security, Web Services (REST/SOAP)
Internet technology: HTTP(S), Web Service Security, PKI/X.509 certificates, OAuth2, Web Application Firewall, SAML/OIDC
Cloud technology: Docker, K8S, AWS/GCP
Database technology: SQL (MySQL/PostgreSQL), NoSQL (MongoDB/DynamoDB)
Risk assessment: CVSSv2 scoring, Threat Modeling (OWASP, Microsoft SDL)
Behavioral skills: analytical, autonomous, creative, knowledge sharing
Experience: 10+ years in software development or 2 years in a similar position.
At Thales we provide CAREERS and not only jobs. With Thales employing 80,000 employees in 68 countries, our mobility policy enables thousands of employees each year to develop their careers at home and abroad, in their existing areas of expertise or by branching out into new fields. Together we believe that embracing flexibility is a smarter way of working. Great journeys start here, apply now! Show more Show less
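As a small illustration of the OAuth2/OIDC skills listed above, a token-validation sketch in Python using the PyJWT library might look like this. The issuer, audience, and JWKS URL are placeholder values, and a production gateway would also add key caching, clock-skew tolerance, and revocation checks.

```python
import jwt  # PyJWT

# Placeholder identity-provider endpoints; real values come from the OIDC discovery document.
ISSUER = "https://idp.example.com/"
AUDIENCE = "payments-api"
JWKS_URL = "https://idp.example.com/.well-known/jwks.json"

_jwks_client = jwt.PyJWKClient(JWKS_URL)


def validate_access_token(token: str) -> dict:
    """Verify an RS256-signed OAuth2/OIDC access token and return its claims."""
    signing_key = _jwks_client.get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],            # reject tokens signed with unexpected algorithms
        audience=AUDIENCE,
        issuer=ISSUER,
        options={"require": ["exp", "iat", "sub"]},  # refuse tokens missing core claims
    )
    return claims
```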
Posted 3 weeks ago
5.0 years
0 Lacs
New Delhi, Delhi, India
Remote
Job Title Senior Backend Developer (Laravel & PHP) Gadget Guruz is India’s pioneering on-site electronics repair and e-waste management platform based in Delhi NCR. We bridge the gap between independent technicians and customers via our Digital products (Website and Apps) and proprietary hardware solutions. Our mission is to bring quality, transparency and accountability to a largely unorganized industry. Role Overview As a Senior Backend Developer, you will be a full-time, on-site member of our Delhi NCR team, owning the design, implementation and maintenance of our core server-side systems. You’ll work in a fast-paced, growth-stage startup environment—with blurred lines between Dev, Ops and Product—to build scalable, secure and high-performance APIs that power both web and mobile clients. Key Responsibilities Architecture & Development Lead end-to-end Laravel application design and coding, following clean-code and OOP principles Build and version RESTful and GraphQL APIs for web (AJAX/Blade/Bootstrap) and mobile (Flutter) clients Database & Performance Model MySQL schemas, optimize queries and manage migrations Implement caching (Redis/Memcached), queueing (RabbitMQ/SQS) and conduct load-testing Cloud Infrastructure Deploy, monitor and scale services on AWS (EC2, RDS, S3, Lambda, CloudWatch) Define Infrastructure-as-Code with Terraform or CloudFormation AI/ML & Real-Time Features Integrate ML models via inference endpoints in collaboration with Data Science (Preferred) Build real-time chat or notification services using WebSockets/Socket.IO Quality & Security Enforce web-security best practices (OWASP, input validation, encryption) Own CI/CD pipelines (GitHub Actions/Jenkins), unit/integration tests (PHPUnit, Mockery) Collaboration & Mentorship Partner with Frontend (Vue/React/Bootstrap) and Mobile (Flutter) teams on API contracts Mentor junior engineers and drive code reviews, documentation and knowledge sharing Must Have Qualifications 5+ years of backend experience with Laravel and PHP (strong OOP & design-pattern skills) Bachelor’s degree in Computer Science, Software Engineering or a related field Deep expertise in MySQL design, optimisation and migrations Proven track record building and securing RESTful or GraphQL APIs Hands-on AWS experience (EC2, RDS, S3, Lambda) and IaC (Terraform/CloudFormation) Solid understanding of web security best practices (OWASP Top 10, HTTPS, CSRF/XSS mitigation) Experience with version control workflows (Git) and setting up CI/CD Demonstrated ability to thrive in a growth-stage startup environment Preferred Skills Real-time frameworks: Node.js/Express.js with Socket.IO or Pusher Containerization (Docker) and orchestration (Kubernetes) NoSQL databases (MongoDB, DynamoDB) Serverless architectures (AWS Lambda, API Gateway) AI/ML model deployment and inference pipelines What we offer Competitive salary + ESOPs Flexible on-site/remote hybrid model (Delhi HQ) Equity participation and a seat at the table in shaping India’s repair-tech revolution How to Apply Fill this form https://forms.gle/vdvwpRd7SeUtuWAp9 , Share GitHub/portfolio links and a one-paragraph cover letter on a project where you built a scalable backend to hr@gadgetguruz.com with Subject: Senior Backend Developer – Your Name We look forward to building the future of electronics repair—and making India greener—together! Show more Show less
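The caching and queueing responsibilities above typically follow a cache-aside pattern. Below is a minimal, framework-agnostic sketch using redis-py, shown in Python for consistency with the other examples on this page even though the role itself is Laravel/PHP; the key prefix, TTL, and loader callback are illustrative assumptions.

```python
import json

import redis

cache = redis.Redis(host="localhost", port=6379, db=0)
TTL_SECONDS = 300  # illustrative TTL; tune per endpoint


def get_repair_job(job_id: str, load_from_db) -> dict:
    """Cache-aside read: try Redis first, fall back to the database, then populate the cache."""
    key = f"repair_job:{job_id}"          # hypothetical key prefix
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    record = load_from_db(job_id)          # e.g. an ORM or SQL lookup
    cache.setex(key, TTL_SECONDS, json.dumps(record))
    return record


def invalidate_repair_job(job_id: str) -> None:
    """Delete the cached copy after a write so readers see fresh data."""
    cache.delete(f"repair_job:{job_id}")
```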
Posted 3 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TTEC Digital is seeking a Senior AWS Software Engineer to join our team. We are just as passionate about the relentless pursuit of Customer Success by providing ideal solutions to solving our client’s business problems by driving customer experience outcomes with our enhanced technical capabilities as you are. Whether you’re the Engineer, Architect, Project Manager, Practice Leader or Sales Executive we need your talent to help us in our exciting journey to success! What You’ll Be Doing As the backbone of our delivery teams, Senior AWS Developers build and deploy our designed solutions for our clients. You will work to refine the process of delivering projects by building internal tools, suggesting new methods of designing and building a system, and providing feedback to their team leads on issues they are experiencing. What You’ll Bring To Us Requirements analysis and design conversations when you have a new project starting Working on issues in Jira with your team - building systems, creating CloudFormation/Serverless templates to deploy resources, etc Learning best practices used in the NodeJS, JavaScript/TypeScript, and Python communities Working with your mentor/team lead to further knowledge of AWS Services, tools, or even new languages to address project needs You need to be ready to learn quickly! What Skills You’ll Need Advanced knowledge of AWS services and cloud architecture 7+ years of development experience with a focus on Node.JS , Java working with AWS services (SAM, CDK, Connect, Lambda, Kinesis, etc). Advanced understanding of the way the web works Functional understanding of agile methodologies such as Scrum Ability to accept constructive feedback Desire to provide assistance to other team members The ideal candidate seeks to understand before prescribing a solution A love for technology and the latest and greatest in development best practices, especially the latest services from AWS Willingness to stand up for your convictions and yet commit to the team’s decision Desired Skills Familiarity with the Serverless Framework ( serverless.com ) Python experience Experience with technologies in AWS Services (SAM, CDK, Connect, Lambda, Kinesis, S3, EC2, DynamoDB, CloudFormation, etc) Who We Are We are passionate about the customer experience. With a deep legacy of over 30 years in the contact center environment, we have the expertise to help you navigate the key technologies to deliver an exceptional customer experience, leveraging all things AWS. We specialize in the design and delivery of Amazon Connect, a cloud-based enterprise contact center solution used around the globe and are uniquely focused on helping businesses improve customer engagement, while maximizing the benefits of the cloud. Our expertise is focused on AI & natural language automation, chatbots, CTI/CRM, enterprise integration, user experience design, analytics and workforce optimization. Combined with best practices and a proven methodology in deploying Amazon Connect, we have the right mix of expertise and innovative solutions to make your vision a reality. Show more Show less
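Since the role above mentions building CloudFormation/Serverless templates and working with the CDK, here is a minimal AWS CDK sketch in Python that synthesizes a stack containing a single Lambda function. The stack, handler, and asset path names are placeholders, and a real Amazon Connect deployment would add many more resources than shown here.

```python
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_lambda as _lambda
from constructs import Construct


class ContactFlowStack(Stack):
    """Illustrative stack: one Lambda that could back an Amazon Connect contact flow."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        _lambda.Function(
            self, "GreetingHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="app.handler",                 # file app.py, function handler
            code=_lambda.Code.from_asset("lambda"),  # local folder bundled into the asset
            timeout=Duration.seconds(10),
        )


app = App()
ContactFlowStack(app, "ContactFlowStack")
app.synth()  # `cdk deploy` turns the synthesized template into CloudFormation resources
```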
Posted 3 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview Working at Atlassian Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. Responsibilities What you'll do Build and ship features and capabilities daily in highly scalable, cross-geo distributed environment Be part of an amazing open and collaborative work environment with other experienced engineers, architects, product managers, and designers Review code with best practices of readability, testing patterns, documentation, reliability, security, and performance considerations in mind Mentor and level up the skills of your teammates by sharing your expertise in formal and informal knowledge sharing sessions Ensure full visibility, error reporting, and monitoring of high performing backend services Participate in Agile software development including daily stand-ups, sprint planning, team retrospectives, show and tell demo sessions Qualifications Your background 6+ years of experience building and developing backend applications Bachelor's or Master's degree with a preference for Computer Science degree Experience crafting and implementing highly scalable and performant RESTful micro-services Proficiency in any modern object-oriented programming language (e.g., Java, Kotlin, Go, Scala, Python, etc.) Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra) Real passion for collaboration and strong interpersonal and communication skills Broad knowledge and understanding of SaaS, PaaS, IaaS industry with hands-on experience of public cloud offerings (AWS, GAE, Azure) Familiarity with cloud architecture patterns and an engineering discipline to produce software with quality Show more Show less
Posted 3 weeks ago
1.0 - 2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Why NxtWave As a Software Development Engineer at NxtWave, you Get first hand experience of building applications and see them released quickly to the NxtWave learners (within weeks) Get to take ownership of the features you build and work closely with the product team Work in a great culture that continuously empowers you to grow in your career Enjoy freedom to experiment & learn from mistakes (Fail Fast, Learn Faster) NxtWave is one of the fastest growing edtech startups. Get first-hand experience in scaling the features you build as the company grows rapidly Build in a world-class developer environment by applying clean coding principles, code architecture, etc. Job Responsibilities Develop REST & GraphQL APIs required for the applications Designing the database schema for new features to develop highly scalable & optimal applications Work closely with the product team & translate the PRDs & User Stories to a right solution Writing highly quality code following the clean code guidelines, design principles & clean architecture with maximum test coverage Take ownership of features you are developing & drive towards completion Do peer code reviews & constantly improve code quality Skills Required 1-2 years of experience in backend application development Strong expertise in Python or Java, MySQL, REST API Design Good understanding of Frameworks like Django or Flask or Spring boot & ability to work with ORMs Expertise on indexes in MySQL and writing optimal queries Comfortable with Git Good problem solving skills Write unit and integration tests with high code coverage Have good understanding of NoSQL databases like DynamoDB, ElasticSearch (Good to Have) Having a good understanding of AWS services is beneficial. Qualities we'd love to find in you The attitude to always strive for the best outcomes and an enthusiasm to deliver high quality software Strong collaboration abilities and a flexible & friendly approach to working with teams Strong determination for completion with a constant eye on solutions Creative ideas with problem solving mind-set Be open to receiving objective criticism and improving upon it Eagerness to learn and zeal to grow Strong communication skills is a huge plus Work Location Hyderabad Show more Show less
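Because the skills above call out Django/Flask familiarity and writing optimal, index-backed MySQL queries, a minimal Django model sketch is shown below. It assumes a configured Django project, and the model, field, and index names are invented for illustration.

```python
from django.db import models


class Submission(models.Model):
    user_id = models.IntegerField()
    course_id = models.IntegerField()
    score = models.FloatField()
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        indexes = [
            # Composite index matching the common filter + sort used below.
            models.Index(fields=["user_id", "created_at"], name="idx_sub_user_created"),
        ]


def recent_submissions(user_id: int, limit: int = 20):
    """Served efficiently by the composite index: filter on user_id, order by created_at."""
    return (
        Submission.objects
        .filter(user_id=user_id)
        .order_by("-created_at")[:limit]
    )
```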
Posted 3 weeks ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Backend Engineer Location: Gurugram, Haryana, India Department: OMai Reports To: Product Owner - OMai FLSA Status: Full Time Pay Range: About Us: At Biogas Engineering, our mission is to create value for our clients by bringing to bear the skills and focus of our employees to exploit renewable energy opportunities and solve environmental challenges faced by our society today. We use our core engineering expertise to perform critical analysis of problems, utilizing value engineering, project controls and our knowledge of facility operations to find sustainable and environmentally friendly solutions for renewable energy projects. The core values of Biogas Engineering are: · Sustainable environmental stewardship. · The health and safety of our clients, staff, and the communities in which we practice. · Professionalism and technical excellence. · The highest standards of business conduct. · Practical, down-to-earth problem solving. · Innovation, creativity, and entrepreneurial spirit within our staff. Job Purpose: Backend Engineer with strong hands-on experience in database design, optimization, and management. The ideal candidate should be proficient in PostgreSQL, SQL, and backend development, ensuring efficient data handling and integration with applications. Duties & Responsibilities: · Develop and maintain backend services with a strong focus on database performance and scalability . · Design, optimize, and manage PostgreSQL databases for high-performance applications. · Write efficient SQL queries, stored procedures, and indexing strategies . · Ensure data integrity, security, and backup/recovery strategies for databases. · Work with AWS services such as RDS (PostgreSQL), S3, and Lambda for database operations. · Develop RESTful and GraphQL APIs for data access and integration. · Collaborate with frontend and DevOps teams for seamless application deployment. · Utilize Bitbucket for version control and code collaboration. Skills & Qualifications Required Skills and Experience: 10+ years of hands-on experience in backend development with a focus on databases . Expertise in PostgreSQL, SQL , and database performance optimization. Strong backend development experience using Node.js, Python, or Java . Experience in query optimization, indexing, and stored procedures . Familiarity with Bitbucket for version control and collaboration. Experience working with AWS database services (RDS, DynamoDB, etc.) . Knowledge of API development and backend frameworks Preferred Skills: Experience in NoSQL databases like MongoDB or DynamoDB. Knowledge of microservices architecture and event-driven systems. Familiarity with Docker and Kubernetes for deployment. Understanding of serverless frameworks in AWS. Minimum Requirements: • Bachelor’s degree in computer engineering • Self-motivated with exceptional time management • Ability to work independently and collaboratively • Excellent troubleshooting skills • Outstanding interpersonal and communication skills (written and verbal), with the ability to communicate with team members and clients at all levels of an organization • Proficient knowledge of Microsoft Office suite of products Direct Reports: None Background Check : Employment will be dependent on candidate successfully passing a background check. Benefits : Company offers competitive compensation based on qualifications, a comprehensive benefits package including medical, dental, vision, and 401k benefits. Paid holidays and Paid Time Off (PTO) are also offered. 
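To illustrate the query-optimization work described above, a short sketch using psycopg2 shows the usual loop of inspecting a plan with EXPLAIN ANALYZE and adding a covering composite index. The connection parameters, table, and column names are invented for the example.

```python
import psycopg2

# Connection parameters are placeholders; real values would come from configuration or secrets.
conn = psycopg2.connect(host="localhost", dbname="energy", user="app", password="secret")
conn.autocommit = True

with conn.cursor() as cur:
    # Inspect the current plan for a hot query (EXPLAIN ANALYZE actually runs it).
    cur.execute(
        "EXPLAIN ANALYZE SELECT site_id, reading_ts, flow_rate "
        "FROM gas_readings WHERE site_id = %s AND reading_ts >= %s",
        (42, "2024-01-01"),
    )
    for (line,) in cur.fetchall():
        print(line)  # look for sequential scans over large tables

    # A composite index matching the WHERE clause, covering the selected column,
    # typically turns the sequential scan into an index scan (PostgreSQL 11+ syntax).
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_gas_readings_site_ts "
        "ON gas_readings (site_id, reading_ts) INCLUDE (flow_rate)"
    )
```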
Biogas Engineering provides equal employment opportunities to all applicants and employees and strictly prohibits any type of harassment or discrimination in regard to race, religion, age, color, sex, disability status, national origin, genetics, sexual orientation, protected veteran status, gender expression, gender identity, or any other characteristic protected under federal, state, and/or local laws. Consistent with the Americans with Disabilities Act (ADA), it is the policy of [Company Name] to provide reasonable accommodation when requested by a qualified applicant or employee with a disability, unless such accommodation would cause an undue hardship. The policy regarding requests for reasonable accommodation applies to all aspects of employment, including the application process. If reasonable accommodation is needed, please contact [name and/or department, telephone, and e-mail address]. Your employment with Biogas Engineering is on an at-will basis, meaning either you or the Company can terminate the employment relationship, at any time, for any or no reason, and with or without cause or notice. As an at-will employee, your employment with Biogas Engineering is not guaranteed for any length of time. Show more Show less
Posted 3 weeks ago
5.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Job Role: AWS + Python (L3 Support)
Experience: 5+ years
Location: Nagpur
Job Details:
7+ years of microservices development experience in two of these: Python, Java, Scala
5+ years of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores
5+ years of experience with Big Data technologies: Apache Spark, Hadoop, or Kafka
3+ years of experience with relational & non-relational databases: Postgres, MySQL, NoSQL (DynamoDB or MongoDB)
3+ years of experience working with data consumption patterns
3+ years of experience working with automated build and continuous integration systems
2+ years of experience with search and analytics platforms: OpenSearch or ElasticSearch
2+ years of experience in cloud technologies: AWS (Terraform, S3, EMR, EKS, EC2, Glue, Athena)
Exposure to data-warehousing products: Snowflake or Redshift
Exposure to Relational Data Modelling, Dimensional Data Modelling & NoSQL Data Modelling concepts
Show more Show less
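For context on the data-pipeline expectations above, a compact PySpark sketch of an S3-to-S3 batch job (read raw JSON, clean it, write partitioned Parquet that Glue and Athena can query) might look as follows; the bucket paths, columns, and partition key are illustrative only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-batch").getOrCreate()

# Illustrative S3 locations; a real job would take these as parameters.
RAW_PATH = "s3://example-raw-bucket/orders/2024-06-01/"
CURATED_PATH = "s3://example-curated-bucket/orders/"

raw = spark.read.json(RAW_PATH)

curated = (
    raw.dropDuplicates(["order_id"])                      # basic de-duplication
       .withColumn("order_date", F.to_date("order_ts"))   # derive the partition column
       .filter(F.col("amount") > 0)                       # drop obviously bad records
)

# Partitioned Parquet is cheap to scan from Athena, Glue, and EMR alike.
curated.write.mode("overwrite").partitionBy("order_date").parquet(CURATED_PATH)

spark.stop()
```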
Posted 3 weeks ago