5.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, Java/J2EE, and hands-on experience implementing CI/CD pipelines.

Responsibilities
- Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
- Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
- Use the AWS SDK for Java to interact with AWS services effectively
- Drive deployment automation through the AWS CDK for Java, CloudFormation, or Terraform
- Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or ECS
- Apply advanced microservices concepts and adhere to best practices during development
- Build, test, and debug code while resolving technical setbacks effectively
- Expose application functionality via APIs using Lambda and Spring Boot
- Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
- Implement robust unit tests with JUnit or an equivalent testing framework
- Oversee source code management on platforms such as GitLab, GitHub, or Bitbucket
- Ensure efficient application builds using Maven or Gradle
- Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
- 5 to 12 years of experience in Java development and AWS services
- Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
- Proficiency with Docker and container orchestration via Kubernetes on AWS EKS or ECS
- Strong understanding of core AWS services such as EC2, VPC, RDS, EBS, and EFS
- Competency in deployment tools such as AWS CDK, Terraform, or CloudFormation
- Knowledge of NoSQL databases, storage solutions, Amazon ElastiCache, and DynamoDB
- Understanding of AWS orchestration tools for automation and data processing
- Ability to handle production workloads, automate tasks, and manage logs effectively
- Experience writing scalable applications based on microservices principles

Nice to have
- Proficiency with core AWS services such as Auto Scaling, load balancers, Route 53, and IAM
- Scripting skills in Linux shell, Python, or Windows PowerShell, or experience with Ansible/Chef/Puppet
- Experience with build automation tools such as Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
- Familiarity with collaboration tools such as Jira and Confluence
- Knowledge of deployment strategies, including blue-green and canary deployments
- Demonstrated experience with the ELK (Elasticsearch, Logstash, Kibana) stack
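The REST-API responsibility above (Lambda behind API Gateway) boils down to a simple contract: the HTTP request arrives as an event object, and the handler returns a status/body structure that the gateway maps back to an HTTP response. A minimal sketch, in Python for brevity (the role's work would be in Java, and the query parameter and message shape here are invented for illustration):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    API Gateway passes the HTTP request as `event`; the handler returns a
    dict with statusCode/headers/body, which the gateway turns into an
    HTTP response.
    """
    # Query-string parameters may be absent entirely, so guard against None.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The same statusCode/body contract applies whether the handler is written in Python or with a Java `RequestHandler`.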
Posted 1 month ago
3.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Summary

Artificial Intelligence & Engineering

Join our AI & Engineering team in transforming technology platforms, driving innovation, and helping make a significant impact on our clients' success. You'll work alongside talented professionals reimagining and re-engineering operations and processes that are critical to businesses. Your contributions can help clients improve financial performance, accelerate new digital ventures, and fuel growth through innovation.

AI & Engineering leverages cutting-edge engineering capabilities to build, deploy, and operate integrated/verticalized sector solutions in software, data, AI, network, and hybrid cloud infrastructure. These solutions are powered by engineering for business advantage, transforming mission-critical operations. We enable clients to stay ahead with the latest advancements by transforming engineering teams and modernizing technology and data platforms. Our delivery models are tailored to meet each client's unique requirements.

Role: Consultant
As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within agreed timelines in an agile framework. As required, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements.
The work you will do includes:
- Understand business requirements and processes
- Develop software solutions using industry-standard delivery methodologies such as Agile and Waterfall, across different architectural patterns
- Write clean, efficient, and well-documented code that maintains industry and client standards, ensuring code-quality and code-coverage adherence, and debug and resolve issues/defects
- Participate in the delivery process (e.g., Agile development), actively contributing to sprint planning, daily stand-ups, and retrospectives
- Resolve issues or incidents reported by end users, and escalate quality issues or risks to team leads/scrum masters/project leaders
- Develop expertise in the end-to-end construction cycle, from design (low-level and high-level) through coding, unit testing, deployment, and defect fixing, while coordinating with multiple stakeholders
- Create and maintain technical documentation, including design specifications, API documentation, and usage guidelines
- Demonstrate a problem-solving mindset and the ability to analyze business requirements

Qualifications
Skills / Project Experience

Must have:
- Excellent written and verbal communication skills
- 3 to 6 years of experience working on microservices architecture, web services, API development, and enterprise integration layers
- Experience implementing microservices architecture, visualization, and development processes
- Strong technical skills in Java and the Spring Boot framework
- Experience with RESTful and SOAP web services
- Experience implementing a services layer using more than one integration technology
- Knowledge of API management, service discovery, service orchestration, and security as a service
- Implementation experience with XML, version control systems such as GitHub and SVN, and build tools such as Maven, Gradle, or Ant
- Experience with best practices such as OOP principles, exception handling, and use of generics; well-defined, reusable, easy-to-maintain code; and tools such as JUnit, Mockito, SoapUI, Postman, Checkstyle, and SonarQube
- Experience with SQL databases such as MySQL, PostgreSQL, or Oracle, and frameworks such as JPA/Hibernate
- Experience using logging and monitoring tools such as Splunk, Dynatrace, or similar

Good to have:
- Experience working with Docker and Kubernetes
- Experience with NoSQL databases such as MongoDB and DynamoDB
- Experience with at least one cloud platform (AWS/Azure/GCP)
- Experience with build and test automation and continuous integration (CI) using Jenkins/Hudson
- Knowledge of Agile and Scrum software development methodologies
- Experience with NoSQL and DevOps
- Knowledge of design patterns such as the circuit breaker pattern and proxy pattern
- Experience using message brokers such as Apache Kafka and ActiveMQ
- Experience deploying microservices on cloud platforms

Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university
Prior experience: 3 to 6 years of hands-on experience with microservices and Spring Boot on cloud technologies
Location: Bengaluru/Hyderabad/Pune/Mumbai

The Team
Deloitte Consulting LLP's Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today's complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients' business operations and helps them take advantage of new technologies; it drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice at www.deloitte.com.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities.
At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte.
Check out recruiting tips from Deloitte recruiters. Requisition code: 300269
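The good-to-have list above names the circuit breaker pattern. The idea is that after repeated downstream failures the breaker "opens" and fails fast instead of hammering a struggling service, then allows a trial call after a cooldown. A toy Python sketch (not tied to any framework in the listing; the threshold and timeout values are invented):

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after `max_failures` consecutive failures the
    circuit opens and calls fail fast until `reset_timeout` seconds pass."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

In the Java/Spring Boot stack this listing describes, the same behavior would typically come from a library such as Resilience4j rather than hand-rolled code.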
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You'll Be Doing...
You will be part of a world-class Container Platform team that builds and operates highly scalable Kubernetes-based container platforms (EKS, OCP, OKE, and GKE) at large scale for Global Technology Solutions at Verizon, a top-20 Fortune 500 company. This individual will have a high level of technical expertise and daily hands-on implementation, working in a product team developing services in two-week sprints using agile principles. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building Docker containers via a fully automated CI/CD pipeline on AWS using Ansible playbooks, CI/CD tools and processes (Jenkins, JIRA, GitLab, ArgoCD), Python, shell scripts, or other scripting technologies. You will have autonomous control over day-to-day activities allocated to the team as part of agile development of new services.
- Automation and testing of different platform deployments, maintenance, and decommissioning
- Full-stack development
- Participate in POC (proof of concept) technical evaluations of new technologies for use in the cloud

What we're looking for...
You'll Need To Have:
- Bachelor's degree or four or more years of experience
- GitOps CI/CD workflows (ArgoCD, Flux) and working in an Agile ceremonies model
- Address Jira tickets opened by platform customers
- Strong expertise in the SDLC and Agile development
- Experience designing, developing, and implementing scalable React/Node-based applications (full-stack developer)
- Experience developing HTTP/RESTful APIs and microservices
- Experience with serverless Lambda development, AWS EventBridge, AWS Step Functions, DynamoDB, and Python
- Database experience (RDBMS, NoSQL, etc.)
- Familiarity integrating with existing web application portals
- Strong backend development experience with languages including Golang (preferred), Spring Boot, and Python
- Experience with GitLab CI/CD, Jenkins, Helm, Terraform, and Artifactory
- Strong experience developing K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc.
- Development and working experience with service mesh lifecycle management, and with configuring and troubleshooting applications deployed on a service mesh and service-mesh-related issues
- Strong Terraform and/or Ansible and Bash scripting experience
- Effective code review, quality, and performance-tuning experience; test-driven development
- Certified Kubernetes Application Developer (CKAD)
- Excellent cross-collaboration and communication skills

Even better if you have one or more of the following:
- Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, Xray, etc.
- Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc.
- Experience with monitoring tools like New Relic (NRDOT) and OTLP
- Certified Kubernetes Administrator (CKA)
- Certified Kubernetes Security Specialist (CKS)
- Red Hat Certified OpenShift Administrator
- Development experience with the Operator SDK
- Experience creating validating and/or mutating webhooks
- Familiarity with creating custom EnvoyFilters for the Istio service mesh, and with cost-optimization tools like Kubecost and CloudHealth to implement right-sizing recommendations

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours
40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.
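Much of the day-to-day platform work described above (addressing customers' Jira tickets, troubleshooting workloads) comes down to inspecting cluster state programmatically. As an illustrative sketch, a helper that scans `kubectl get pods -o json` output for pods that are not healthy; the field paths follow the Kubernetes Pod API, but the helper itself is a hypothetical example, written in Python as one of the scripting languages the listing names:

```python
import json

def not_ready_pods(pod_list_json):
    """Given `kubectl get pods -o json` output, return the names of pods
    that are neither Succeeded nor Running-and-Ready."""
    pods = json.loads(pod_list_json)["items"]
    unhealthy = []
    for pod in pods:
        status = pod.get("status", {})
        phase = status.get("phase")
        # Condition list -> {"Ready": "True"/"False", ...}
        conditions = {c["type"]: c["status"] for c in status.get("conditions", [])}
        if phase == "Succeeded":
            continue  # completed jobs are fine
        if phase != "Running" or conditions.get("Ready") != "True":
            unhealthy.append(pod["metadata"]["name"])
    return unhealthy
```

The same check could be wired into a Jira-triage script or a monitoring probe.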
Posted 1 month ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Job
Our Team: The Web Application Platform team enables the "Factories of the Future" by developing best-in-class web apps and APIs that support the integration between the data platform and web applications. We are a team dedicated to fostering knowledge and expertise in well-crafted and sustainable software development.

Main Responsibilities
- Develop and maintain backend services with Node.js (Nest.js framework) and Vue.js frontend apps
- Work with AWS cloud services and Terraform/Terragrunt to deploy infrastructure
- Write clean, reusable, and scalable code and tests using TypeScript/JavaScript
- Perform code reviews and ensure code quality and adherence to coding and architectural standards
- Troubleshoot and debug software issues
- Continuously learn and keep up to date with the latest technologies and industry trends
- Suggest new, innovative patterns to improve the software development process
- Work effectively in a fast-paced and constantly evolving environment
- Assist team management in leading developers to meet timelines, and help unblock technical impediments
- Actively contribute to the engineering community and define leading practices and frameworks
- Collaborate with software and solution architects to design and implement best-in-class software solutions

About you
We are seeking an experienced Full Stack Developer with a strong background in TypeScript and JavaScript, capable of developing and maintaining both backend (Node.js with the Nest.js framework) and frontend (Vue.js or other modern SPA frameworks) components. The ideal candidate should have expertise in database management, API design (REST and GraphQL), and deployment using AWS cloud services and infrastructure-as-code tools like Terraform/Terragrunt.

Soft Skills
The ability to analyze problems, think critically, and develop creative solutions is crucial. The ideal candidate has a growth mindset, embracing change and demonstrating a willingness to learn and adapt, along with excellent written, verbal, and interpersonal skills and the ability to communicate ideas, concepts, and solutions to peers and leaders.

Technical Skills
The candidate should be adept at writing clean, efficient, and maintainable code using TypeScript and JavaScript to develop both backend and frontend components. A solid understanding of core development concepts is crucial: working with databases, designing and implementing APIs (including REST and GraphQL), and handling server-side logic. The ideal candidate has practical experience working with AWS services, such as EC2, S3, Lambda, and DynamoDB, to deploy and manage cloud-based applications. Additionally, proficiency in infrastructure-as-code tools is important for provisioning and managing infrastructure resources in an automated and scalable manner.

Experience: 7+ years
Education: While not mandatory, a relevant educational background or certifications related to software development would be a plus.
Languages: Fluency in written and spoken English.

Pursue Progress. Discover Extraordinary.
Progress doesn't happen without people: people from different backgrounds, in different locations, doing different roles, all united by one thing, a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas and exploring all the opportunities we have to offer. Let's pursue progress. And let's discover extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch One day at Sanofi and check out our Diversity, Equity and Inclusion initiatives at Sanofi.com.
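The API-design skills this posting emphasizes (REST and GraphQL) often come down to recurring patterns such as cursor-based pagination, where the server hands back an opaque cursor encoding where the next page starts. A minimal sketch of the idea, in Python for brevity (the team's stack is TypeScript, and the cursor format here is an invented example):

```python
import base64
import json

def paginate(items, cursor=None, limit=2):
    """Return one page of `items` plus an opaque cursor for the next page,
    or next_cursor=None when the collection is exhausted."""
    offset = 0
    if cursor:
        # The cursor is just a base64-encoded offset in this toy version.
        offset = json.loads(base64.b64decode(cursor))["offset"]
    page = items[offset:offset + limit]
    next_offset = offset + len(page)
    next_cursor = None
    if next_offset < len(items):
        next_cursor = base64.b64encode(
            json.dumps({"offset": next_offset}).encode()
        ).decode()
    return {"items": page, "next_cursor": next_cursor}
```

Real APIs usually encode a stable sort key rather than a raw offset so pages stay consistent while data changes, but the client-facing contract is the same.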
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You'll Be Doing...
You will be part of a world-class Container Platform team that builds and operates highly scalable Kubernetes-based container platforms (EKS, OCP, OKE, and GKE) at large scale for Global Technology Solutions at Verizon, a top-20 Fortune 500 company. This individual will have sound technical expertise and daily hands-on implementation, working in a product team developing services in two-week sprints using agile principles. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building Docker containers via a fully automated CI/CD pipeline on AWS using Ansible playbooks, CI/CD tools and processes (Jenkins, JIRA, GitLab, ArgoCD), Python, shell scripts, or other scripting technologies. You will have autonomous control over day-to-day activities allocated to the team as part of agile development of new services.
- Automation and testing of different platform deployments, maintenance, and decommissioning
- Full-stack development

What we're looking for...
You'll Need To Have:
- Bachelor's degree or two or more years of experience
- Address Jira tickets opened by platform customers
- GitOps CI/CD workflows (ArgoCD, Flux) and working in an Agile ceremonies model
- Expertise in the SDLC and Agile development
- Design, develop, and implement scalable React/Node-based applications (full-stack developer)
- Experience developing HTTP/RESTful APIs and microservices
- Experience with serverless Lambda development, AWS EventBridge, AWS Step Functions, DynamoDB, Python, RDBMS, NoSQL, etc.
- Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc.
- Familiarity integrating with existing web application portals, and backend development experience with languages including Golang (preferred), Spring Boot, and Python
- Experience with GitLab, GitLab CI/CD, Jenkins, Helm, Terraform, and Artifactory
- Development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc.
- Development and working experience with service mesh lifecycle management, and with configuring and troubleshooting applications deployed on a service mesh and service-mesh-related issues
- Experience with Terraform and/or Ansible
- Experience with Bash scripting
- Effective code review, quality, and performance-tuning experience; test-driven development
- Certified Kubernetes Application Developer (CKAD)
- Excellent cross-collaboration and communication skills

Even better if you have one or more of the following:
- Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, Xray, etc.
- Networking of microservices: a solid understanding of Kubernetes networking and troubleshooting
- Experience with monitoring tools like New Relic; working experience with Kiali and Jaeger lifecycle management, and assisting app teams in leveraging these tools for their observability needs
- K8s SRE tools for troubleshooting
- Certified Kubernetes Administrator (CKA)
- Certified Kubernetes Security Specialist (CKS)
- Red Hat Certified OpenShift Administrator

Your benefits package will vary depending on the country in which you work, subject to business approval.

Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours
40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.
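The automation work this listing describes (platform deployments, maintenance, decommissioning) typically wraps flaky external calls in retries with exponential backoff. A minimal, generic Python sketch, since Python is one of the scripting languages the listing names (the function and its parameters are invented, not taken from any tool above):

```python
import time

def retry(fn, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Call `fn`, retrying on any exception with exponential backoff
    (base_delay * 2**i between attempts); re-raise the final failure."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # attempts exhausted
            sleep(base_delay * (2 ** i))
```

Injecting `sleep` keeps the helper testable; production versions usually add jitter and retry only on specific, transient error types.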
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, Java/J2EE, and hands-on experience implementing CI/CD pipelines.

Responsibilities
- Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
- Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
- Use the AWS SDK for Java to interact with AWS services effectively
- Drive deployment automation through the AWS CDK for Java, CloudFormation, or Terraform
- Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or ECS
- Apply advanced microservices concepts and adhere to best practices during development
- Build, test, and debug code while resolving technical setbacks effectively
- Expose application functionality via APIs using Lambda and Spring Boot
- Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
- Implement robust unit tests with JUnit or an equivalent testing framework
- Oversee source code management on platforms such as GitLab, GitHub, or Bitbucket
- Ensure efficient application builds using Maven or Gradle
- Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
- 5 to 12 years of experience in Java development and AWS services
- Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
- Proficiency with Docker and container orchestration via Kubernetes on AWS EKS or ECS
- Strong understanding of core AWS services such as EC2, VPC, RDS, EBS, and EFS
- Competency in deployment tools such as AWS CDK, Terraform, or CloudFormation
- Knowledge of NoSQL databases, storage solutions, Amazon ElastiCache, and DynamoDB
- Understanding of AWS orchestration tools for automation and data processing
- Ability to handle production workloads, automate tasks, and manage logs effectively
- Experience writing scalable applications based on microservices principles

Nice to have
- Proficiency with core AWS services such as Auto Scaling, load balancers, Route 53, and IAM
- Scripting skills in Linux shell, Python, or Windows PowerShell, or experience with Ansible/Chef/Puppet
- Experience with build automation tools such as Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
- Familiarity with collaboration tools such as Jira and Confluence
- Knowledge of deployment strategies, including blue-green and canary deployments
- Demonstrated experience with the ELK (Elasticsearch, Logstash, Kibana) stack
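The nice-to-have list above mentions blue-green and canary deployments. The heart of a canary rollout is routing a small, sticky fraction of traffic to the new version, so any one user sees a consistent experience. A toy Python sketch (the hashing scheme is an invented illustration; real rollouts delegate this to a load balancer or service mesh):

```python
import hashlib

def route_request(user_id, canary_percent):
    """Sticky canary routing: hash the user id into a bucket in [0, 100)
    and send buckets below `canary_percent` to the canary version. The
    same user always lands on the same version during the rollout."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"
```

Raising `canary_percent` in steps (1, 10, 50, 100) while watching error rates is the usual progression; blue-green is the limiting case of flipping all traffic at once.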
Posted 1 month ago
3.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Summary

Artificial Intelligence & Engineering

Join our AI & Engineering team in transforming technology platforms, driving innovation, and helping make a significant impact on our clients' success. You'll work alongside talented professionals reimagining and re-engineering operations and processes that are critical to businesses. Your contributions can help clients improve financial performance, accelerate new digital ventures, and fuel growth through innovation.

AI & Engineering leverages cutting-edge engineering capabilities to build, deploy, and operate integrated/verticalized sector solutions in software, data, AI, network, and hybrid cloud infrastructure. These solutions are powered by engineering for business advantage, transforming mission-critical operations. We enable clients to stay ahead with the latest advancements by transforming engineering teams and modernizing technology and data platforms. Our delivery models are tailored to meet each client's unique requirements.

Role: Consultant
As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within agreed timelines in an agile framework. As required, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements.
The work you will do includes:
- Understand business requirements and processes
- Develop software solutions using industry-standard delivery methodologies such as Agile and Waterfall, across different architectural patterns
- Write clean, efficient, and well-documented code that maintains industry and client standards, ensuring code-quality and code-coverage adherence, and debug and resolve issues/defects
- Participate in the delivery process (e.g., Agile development), actively contributing to sprint planning, daily stand-ups, and retrospectives
- Resolve issues or incidents reported by end users, and escalate quality issues or risks to team leads/scrum masters/project leaders
- Develop expertise in the end-to-end construction cycle, from design (low-level and high-level) through coding, unit testing, deployment, and defect fixing, while coordinating with multiple stakeholders
- Create and maintain technical documentation, including design specifications, API documentation, and usage guidelines
- Demonstrate a problem-solving mindset and the ability to analyze business requirements

Qualifications
Skills / Project Experience

Must have:
- Excellent written and verbal communication skills
- 3 to 6 years of experience working on microservices architecture, web services, API development, and enterprise integration layers
- Experience implementing microservices architecture, visualization, and development processes
- Strong technical skills in Java and the Spring Boot framework
- Experience with RESTful and SOAP web services
- Experience implementing a services layer using more than one integration technology
- Knowledge of API management, service discovery, service orchestration, and security as a service
- Implementation experience with XML, version control systems such as GitHub and SVN, and build tools such as Maven, Gradle, or Ant
- Experience with best practices such as OOP principles, exception handling, and use of generics; well-defined, reusable, easy-to-maintain code; and tools such as JUnit, Mockito, SoapUI, Postman, Checkstyle, and SonarQube
- Experience with SQL databases such as MySQL, PostgreSQL, or Oracle, and frameworks such as JPA/Hibernate
- Experience using logging and monitoring tools such as Splunk, Dynatrace, or similar

Good to have:
- Experience working with Docker and Kubernetes
- Experience with NoSQL databases such as MongoDB and DynamoDB
- Experience with at least one cloud platform (AWS/Azure/GCP)
- Experience with build and test automation and continuous integration (CI) using Jenkins/Hudson
- Knowledge of Agile and Scrum software development methodologies
- Experience with NoSQL and DevOps
- Knowledge of design patterns such as the circuit breaker pattern and proxy pattern
- Experience using message brokers such as Apache Kafka and ActiveMQ
- Experience deploying microservices on cloud platforms

Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university
Prior experience: 3 to 6 years of hands-on experience with microservices and Spring Boot on cloud technologies
Location: Bengaluru/Hyderabad/Pune/Mumbai

The Team
Deloitte Consulting LLP's Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today's complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients' business operations and helps them take advantage of new technologies; it drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice at www.deloitte.com.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities.
At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte.
Check out recruiting tips from Deloitte recruiters. Requisition code: 300269
Posted 1 month ago
8.0 - 12.0 years
20 - 25 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
We are looking for an experienced Senior Full Stack Developer to lead and deliver critical components of a modern SaaS platform. This is a technically demanding role with a focus on system architecture, integration, and mentoring junior team members. The successful candidate will drive high-quality outcomes across the front-end and back-end, working closely with cross-functional teams. What You Will Do * Architect, develop, and maintain robust web applications * Design scalable, modular system components * Lead by example in code quality, review practices, and test coverage * Integrate third-party APIs and manage data flows * Troubleshoot and resolve complex technical issues * Write clean, testable, maintainable code * Produce clear, structured technical documentation * Collaborate in planning, delivery, and technical decision-making Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
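Integrating third-party APIs, as this role requires, usually means tolerating transient failures. A minimal retry-with-exponential-backoff sketch (illustrative only; function names and delays are hypothetical, not part of the role description):

```python
import time

def call_with_retry(fn, retries=3, base_delay=0.01):
    """Call fn, retrying on exception with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

# Example: a flaky call that succeeds only on the third attempt
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retry(flaky))  # ok
```

In practice the delay would be larger, jittered, and limited to retryable errors (timeouts, 5xx responses) rather than all exceptions.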
Posted 1 month ago
2.0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
🚀 We're Hiring: Technical Sales Executive – Craxinno Technologies Pvt. Ltd. Location: Remote / Jaipur (Hybrid option available) Experience: 1–2 Years Job Type: Full-time Salary: As per experience + attractive incentives About Us Craxinno Technologies Pvt. Ltd. is a full-service IT solutions provider with a 98% job success rate across global platforms. We specialise in delivering scalable web, mobile, and SaaS solutions using the latest technologies, including React, Node.js, Laravel, and more. With a team of over 30 tech professionals, we are growing and seeking a Technical Sales Executive to join our dynamic business development team. Role Overview We’re looking for a smart, self-driven Sales Executive with a technical background who can identify new business opportunities, communicate with potential clients, and understand our tech stack well enough to support bidding and presales conversations. Key Responsibilities Drive outbound/inbound sales through platforms like Upwork, LinkedIn, and email outreach. Understand client requirements and effectively communicate Craxinno’s capabilities and solutions. Prepare project proposals, pitch decks, and initial estimations in collaboration with technical leads. Maintain CRM records and consistently follow up on leads and pipeline. Help qualify and convert leads through strong product/tech knowledge and confidence in communication. Work closely with project managers and design/dev teams to ensure alignment during presales and handoff. 
Required Skills Technical Understanding of common web and app development stacks: Frontend: React.js, Redux, TypeScript, Next.js, Angular, HTML5, CSS, Bootstrap, Tailwind Backend: Node.js, Express.js, Python Databases: MongoDB, PostgreSQL, GraphQL, DynamoDB, MySQL Excellent communication and presentation skills in English Proven experience in client handling, lead generation, or B2B/B2C tech sales Basic understanding of bidding platforms like Upwork, Freelancer, or Clutch is a plus Strong organizational and follow-up skills Nice to Have Previous experience working in a web/app development agency Ability to read technical documentation and convert it into simplified client-friendly language Familiarity with tools like HubSpot, Trello, Figma, or Slack Why Join Craxinno? Work with a fast-growing tech company that values talent and transparency Exposure to international clients and scalable tech products Performance-based growth and commission opportunities Collaborative and open work culture 📩 Interested? Send your resume and portfolio (if any) to [nimish@craxinno.com] 📞 For more info, visit: www.craxinno.com
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
As a Fullstack SDE - II at NxtWave, you Build applications at a scale and see them released quickly to the NxtWave learners (within weeks) Get to take ownership of the features you build and work closely with the product team Work in a great culture that continuously empowers you to grow in your career Enjoy freedom to experiment & learn from mistakes (Fail Fast, Learn Faster) NxtWave is one of the fastest growing edtech startups. Get first-hand experience in scaling the features you build as the company grows rapidly Build in a world-class developer environment by applying clean coding principles, code architecture, etc. Responsibilities Lead design and delivery of complex end-to-end features across frontend, backend, and data layers. Make strategic architectural decisions on frameworks, datastores, and performance patterns. Review and approve pull requests, enforcing clean-code guidelines, SOLID principles, and design patterns. Build and maintain shared UI component libraries and backend service frameworks for team reuse. Identify and eliminate performance bottlenecks in both browser rendering and server throughput. Instrument services with metrics and logging, driving SLIs, SLAs, and observability. Define and enforce comprehensive testing strategies: unit, integration, and end-to-end. Own CI/CD pipelines, automating builds, deployments, and rollback procedures. Ensure OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices. Partner with Product, UX, and Ops to translate business objectives into technical roadmaps. Facilitate sprint planning, estimation, and retrospectives for predictable deliveries. Mentor and guide SDE-1s and interns; participate in hiring. Qualifications & Skills 3–5 years building production Full stack applications end-to-end with measurable impact. Proven leadership in Agile/Scrum environments with a passion for continuous learning. Deep expertise in React (or Angular/Vue) with TypeScript and modern CSS methodologies. Proficient in Node.js (Express/NestJS) or Python (Django/Flask/FastAPI) or Java (Spring Boot). Expert in designing RESTful and GraphQL APIs and scalable database schemas. Knowledge of MySQL/PostgreSQL indexing, NoSQL (ElasticSearch/DynamoDB), and caching (Redis). Knowledge of Containerization (Docker) and commonly used AWS services such as Lambda, EC2, S3, API Gateway, etc. Skilled in unit/integration (Jest, pytest) and E2E testing (Cypress, Playwright). Frontend profiling (Lighthouse) and backend tracing for performance tuning. Secure coding: OAuth2/JWT, XSS/CSRF protection, and familiarity with compliance regimes. Strong communicator able to convey technical trade-offs to non-technical stakeholders. Experience in reviewing pull requests and providing constructive feedback to the team. Qualities we'd love to find in you: The attitude to always strive for the best outcomes and an enthusiasm to deliver high quality software Strong collaboration abilities and a flexible & friendly approach to working with teams Strong determination with a constant eye on solutions Creative ideas with problem solving mind-set Be open to receiving objective criticism and improving upon it Eagerness to learn and zeal to grow Strong communication skills is a huge plus Work Location: Hyderabad About NxtWave NxtWave is one of India’s fastest-growing ed-tech startups, revolutionizing the 21st-century job market. NxtWave is transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background. NxtWave is founded by Rahul Attuluri (Ex Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay) and Anupam Pedarla (IIT Kharagpur). Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital. As an official partner for NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence. Some of its prestigious recognitions include: Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally ‘Startup Spotlight Award of the Year’ by T-Hub in 2023 ‘Best Tech Skilling EdTech Startup of the Year 2022’ by Times Business Awards ‘The Greatest Brand in Education’ in a research-based listing by URS Media NxtWave Founders Anupam Pedarla and Sashank Gujjula were honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. NxtWave now has paid subscribers from 650+ districts across India. Its learners are hired by over 2000+ companies including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte and more. Know more about NxtWave: https://www.ccbp.in Read more about us in the news – Economic Times | CNBC | YourStory | VCCircle
Posted 1 month ago
0 years
0 Lacs
Krishnagiri, Tamil Nadu, India
On-site
Experience with cloud databases and data warehouses (AWS Aurora, RDS/PG, Redshift, DynamoDB, Neptune). Building and maintaining scalable real-time database systems using the AWS stack (Aurora, RDS/PG, Lambda) to enhance business decision-making capabilities. Provide valuable insights and contribute to the design, development, and architecture of data solutions. Experience utilizing various design and coding techniques to improve query performance. Expertise in performance optimization, capacity management, and workload management. Working knowledge of relational database internals (locking, consistency, serialization, recovery paths). Awareness of customer workloads and use cases, including performance, availability, and scalability. Monitor database health and promptly identify and resolve issues. Maintain comprehensive documentation for databases, business continuity plans, cost usage, and processes. Proficient in using Terraform or Ansible for database provisioning and infrastructure management. Additional 'nice-to-have' expertise in Python, Databricks, Apache Airflow, Google Cloud Platform (GCP), and Microsoft Azure.
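The "coding techniques to improve query performance" mentioned above most often come down to indexing. A minimal sketch using SQLite (Python stdlib; the schema and data here are hypothetical, and the exact plan wording varies by SQLite version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

# Without an index, filtering by customer_id scans the whole table
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7").fetchall()
print(plan[0][-1])  # e.g. "SCAN orders"

# After adding an index, the planner can seek directly to matching rows
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7").fetchall()
print(plan[0][-1])  # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

The same SCAN-vs-SEARCH reading of a query plan (`EXPLAIN` in PostgreSQL/MySQL/Aurora) is the usual first step in the performance work described.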
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Full Stack Developer 2 Responsibilities: Lead design and delivery of complex end-to-end features across frontend, backend, and data layers. Make strategic architectural decisions on frameworks, datastores, and performance patterns. Review and approve pull requests, enforcing clean-code guidelines, SOLID principles, and design patterns. Build and maintain shared UI component libraries and backend service frameworks for team reuse. Identify and eliminate performance bottlenecks in both browser rendering and server throughput. Instrument services with metrics and logging, driving SLIs, SLAs, and observability. Define and enforce comprehensive testing strategies: unit, integration, and end-to-end. Own CI/CD pipelines, automating builds, deployments, and rollback procedures. Ensure OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices. Partner with Product, UX, and Ops to translate business objectives into technical roadmaps. Facilitate sprint planning, estimation, and retrospectives for predictable deliveries. Mentor and guide SDE-1s and interns; participate in hiring. Qualifications & Skills: 3–5 years building production Full stack applications end-to-end with measurable impact. Proven leadership in Agile/Scrum environments with a passion for continuous learning. Deep expertise in React (or Angular/Vue) with TypeScript and modern CSS methodologies. Proficient in Node.js (Express/NestJS) or Python (Django/Flask/FastAPI) or Java (Spring Boot). Expert in designing RESTful and GraphQL APIs and scalable database schemas. Knowledge of MySQL/PostgreSQL indexing, NoSQL (ElasticSearch/DynamoDB), and caching (Redis). Knowledge of Containerization (Docker) and commonly used AWS services such as Lambda, EC2, S3, API Gateway, etc. Skilled in unit/integration (Jest, pytest) and E2E testing (Cypress, Playwright). Frontend profiling (Lighthouse) and backend tracing for performance tuning.
Secure coding: OAuth2/JWT, XSS/CSRF protection, and familiarity with compliance regimes. Strong communicator able to convey technical trade-offs to non-technical stakeholders. Experience in reviewing pull requests and providing constructive feedback to the team. Qualities we'd love to find in you: The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software Strong collaboration abilities and a flexible & friendly approach to working with teams Strong determination with a constant eye on solutions Creative ideas with a problem-solving mindset Be open to receiving objective criticism and improving upon it Eagerness to learn and zeal to grow Strong communication skills are a huge plus Work Location: Hyderabad
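The "OAuth2/JWT" secure-coding requirement boils down to signing claims so they can't be tampered with. A deliberately simplified HMAC-signed-token sketch in the spirit of JWT's HS256 (stdlib only; no header, expiry, or key rotation here, and the secret is hypothetical; real services should use a vetted library such as PyJWT):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # hypothetical key for illustration; never hardcode in production

def sign(payload):
    """Serialize the payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + sig).decode()

def verify(token):
    """Return the payload if the signature checks out, else None."""
    body, _, sig = token.encode().rpartition(b".")
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return None
    return json.loads(base64.urlsafe_b64decode(body))

token = sign({"sub": "user-42", "role": "admin"})
print(verify(token))        # {'role': 'admin', 'sub': 'user-42'}
print(verify(token + "0"))  # None -- tampering breaks the signature
```

The constant-time `hmac.compare_digest` is the detail interviewers tend to probe: a naive `==` comparison can leak signature bytes through timing.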
Posted 1 month ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: A career within PwC Responsibilities: Job Title: Cloud Data Engineer (AWS/Azure/Databricks/GCP) Experience: 2-4 years in Data Engineering Job Description: We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, Databricks, and GCP. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark, and a proven ability to optimize performance in Spark job executions. Key Responsibilities: - Design, build, and maintain scalable data pipelines for a variety of cloud platforms including AWS, Azure, Databricks, and GCP. - Implement data ingestion and transformation processes to facilitate efficient data warehousing. - Utilize cloud services to enhance data processing capabilities: - AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS. - Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus. - GCP: Dataflow, BigQuery, DataProc, Cloud Functions, Bigtable, Pub/Sub, Data Fusion. - Optimize Spark job performance to ensure high efficiency and reliability. - Stay proactive in learning and implementing new technologies to improve data processing frameworks. - Collaborate with cross-functional teams to deliver robust data solutions. - Work on Spark Streaming for real-time data processing as necessary. Qualifications: - 2-4 years of experience in data engineering with a strong focus on cloud environments. - Proficiency in PySpark or Spark is mandatory. - Proven experience with data ingestion, transformation, and data warehousing. - In-depth knowledge and hands-on experience with cloud services (AWS/Azure/GCP). - Demonstrated ability in performance optimization of Spark jobs. - Strong problem-solving skills and the ability to work independently as well as in a team. - Cloud Certification (AWS, Azure, or GCP) is a plus.
- Familiarity with Spark Streaming is a bonus. Mandatory skill sets: Python, PySpark, SQL with (AWS or Azure or GCP) Preferred skill sets: Python, PySpark, SQL with (AWS or Azure or GCP) Years of experience required: 2-4 years Education qualification: BE/BTECH, ME/MTECH, MBA, MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering, Master of Business Administration, Bachelor of Technology Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Python (Programming Language) Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Strategy {+ 22 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
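One of the most common Spark performance optimizations the posting alludes to is broadcasting a small dimension table so a join happens map-side instead of through a shuffle. The idea, sketched in plain Python as a stand-in for the distributed engine (PySpark itself would use `pyspark.sql.functions.broadcast`; the tables here are hypothetical):

```python
# Conceptual sketch of a broadcast (map-side) join: every worker receives a
# full copy of the small dimension table, so each fact row is enriched with
# a local dict lookup and no shuffle of the large table is needed.

fact_rows = [  # large "fact" table: (order_id, country_code, amount)
    (1, "IN", 120.0), (2, "US", 80.0), (3, "IN", 45.0),
]
dim_country = {"IN": "India", "US": "United States"}  # small dimension table

enriched = [
    (order_id, dim_country.get(code, "unknown"), amount)
    for order_id, code, amount in fact_rows
]
print(enriched[0])  # (1, 'India', 120.0)
```

A shuffle join would instead repartition both tables by the join key across the cluster; avoiding that network movement is where the speedup comes from.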
Posted 1 month ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Company They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society. About the Client Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering—reflecting its strategic commitment to driving innovation and value for clients across industries. Job Title : Java Developer Key Skills : AWS EKS and/or Lambda, Java, Kafka, Kotlin etc. 
Job Locations: PAN India Experience: 6+ Years Education Qualification: Any Graduation Work Mode: Hybrid Employment Type: Contractual Notice Period: Immediate - 10 Days Job Description: Java Developer Key Responsibilities: Develop and maintain Domain APIs on AWS EKS and/or Lambda. Create high- and low-level designs and implement capabilities using microservices and Domain-Driven Design principles. Troubleshoot technical issues with in-depth knowledge of technology and functional aspects. Understand and build to non-functional requirements like authorization, access, performance, etc. Assist in tracking and showcasing performance metrics and help optimize performance. Provide innovative solutions for API versioning strategies. Assist in creation of automated test suites which are interoperable across APIs. Provide technical leadership to a team of developers, ensuring adherence to best practices, security standards, and scalability. Closely work with BA, PO, SM, and other stakeholders to understand the requirements and ensure successful delivery of product features on time. Participate in defect triage and analysis. Support Go-Live activities. Required Skills & Qualifications: 6+ years of experience in developing Domain APIs using any distributed programming language like Java, Kotlin, etc. Must have a good understanding of SOAP- and REST-based integration patterns. Knowledge of JSON and XML data structures. Knowledge of SQL and NoSQL databases like MongoDB, DynamoDB, S3, PostgreSQL, etc. Knowledge of various messaging services like AWS SQS, AWS SNS, RabbitMQ, Kafka, etc.
Knowledge of AWS Lambda and Terraform. Experience creating logs, alerts, and dashboards for visualization and troubleshooting. Excellent problem-solving and troubleshooting skills. Ability to lead technical teams and mentor junior developers. Communication skills, both verbal and written; ability to interact with stakeholders. Knowledge of API Management best practices and experience with API Gateways. Understanding of security standards including OAuth, SSO, and encryption for integration and API security. Work closely with business stakeholders to understand their needs and requirements and translate them into technical solutions. Participate in end-of-iteration demos to showcase the key deliverables to IT and business stakeholders. Tech Stack: AWS EKS and Lambda; Programming Languages: Java, Kotlin; Framework: Spring Boot; DB: MongoDB, DynamoDB, Redis Cache, PostgreSQL, S3; Messaging Interface: AWS SQS, AWS SNS, Kafka, RabbitMQ; Terraform for infrastructure provisioning. Knowledge of Open API Specs (OAS) and ACORD NGDS is an added advantage.
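The responsibilities above call for "innovative solutions for API versioning strategies"; the most common strategy is a version segment in the URL path, so old clients keep their response shape while new clients get the evolved one. A framework-free sketch (handler names, routes, and payload shapes are all illustrative):

```python
# Minimal sketch of path-based API versioning: /v1/... and /v2/... routes
# map to different handler implementations of the same resource.

def get_policy_v1(policy_id):
    return {"id": policy_id, "premium": 100}  # legacy flat shape

def get_policy_v2(policy_id):
    # v2 evolves the schema without breaking v1 clients
    return {"id": policy_id, "premium": {"amount": 100, "currency": "USD"}}

ROUTES = {
    ("v1", "policy"): get_policy_v1,
    ("v2", "policy"): get_policy_v2,
}

def dispatch(path):
    """Route e.g. '/v2/policy/42' to the matching versioned handler."""
    _, version, resource, arg = path.split("/")
    return ROUTES[(version, resource)](arg)

print(dispatch("/v1/policy/42"))  # {'id': '42', 'premium': 100}
print(dispatch("/v2/policy/42")["premium"]["currency"])  # USD
```

Header-based versioning (an `Accept` media-type parameter) is the usual alternative; either way the point is that both handler versions stay deployed side by side.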
Posted 1 month ago
0.0 - 15.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Software Development/ Engineering Main location: India, Karnataka, Bangalore Position ID: J0625-0930 Employment Type: Full Time Position Description: Key Responsibilities Backend Development (Mandatory) Develop and maintain server-side logic with several years of hands-on experience in both Python and C#, focusing on API development, microservices, and seamless integration with Azure components (e.g., Redis Cache, Cosmos DB, Azure OpenAI). Frontend Development (Mandatory) Design and implement user interfaces using React with TypeScript to ensure a responsive, user-friendly experience. End-to-End Azure Solution Development (Mandatory) Own the full lifecycle of application development, including deployment automation (GitHub Actions), testing frameworks, code reviews, and best practice implementation to ensure reliable, scalable solutions in Azure. Cross-Functional Collaboration Work closely with DevOps, AI Engineers, and other stakeholders to deliver cohesive, timely, and secure solutions within the Azure ecosystem. We expect you to have Education/Experience: Bachelor’s degree in computer science, engineering, or a related field (or equivalent work experience). Backend Expertise: 10-15 years of hands-on experience: C# (Mandatory), Python (Mandatory) REST Applications Development: Proven experience building RESTful applications in Python and C# is required. Azure Proficiency: Extensive experience working with Azure services (API Management, Redis Cache, Cosmos DB, Azure Functions, Azure AI/OpenAI). Frontend Skills: Solid experience with React using TypeScript. (Mandatory) CI/CD & DevOps: Proficient with GitHub, GitHub Actions, and general DevOps practices, CI/CD pipelines, and agile methodologies. Containerization: Proficiency with Docker for containerization. Best Practices & Design Patterns: Strong grasp of software development principles, best practices, and the ability to apply design patterns effectively.
Collaboration & Communication: Excellent interpersonal skills to work effectively within cross-functional teams. Knowledge Graphs: Exposure to or experience with knowledge graph technology is a plus. Note: We are not seeking an ML Engineer or Data Scientist. We’re looking for a strong software developer who applies solid engineering practices to build robust, maintainable solutions. Your future duties and responsibilities: Experience with Serverless Framework to build and deploy serverless applications on AWS. Experience with AWS Lambda, API Gateway, S3, DynamoDB, etc. Experience with CI/CD pipelines and tools (Jenkins, Azure DevOps, etc.). Experience with scripting languages (Python, Bash). Experience with containerization technologies (Docker). Strong communication and collaboration skills. Ability to work independently and as part of a team. Experience with security tools. Skills: C#, Python, Analytical Thinking. What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
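The duties above mention AWS Lambda behind API Gateway; a minimal Python handler for an API Gateway proxy integration looks roughly like this (the event is simplified and the handler name and greeting are illustrative; the `statusCode`/`headers`/`body` response shape is what the proxy integration expects):

```python
import json

def lambda_handler(event, context):
    """Handle an API Gateway proxy event and return a proxy-format response."""
    # queryStringParameters is None (not {}) when the request has no query string
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test with a hand-built event -- no AWS account needed
resp = lambda_handler({"queryStringParameters": {"name": "CGI"}}, None)
print(resp["statusCode"], resp["body"])  # 200 {"message": "hello, CGI"}
```

The Serverless Framework (also listed above) would wire this function to an HTTP route in its YAML config; the handler itself stays a plain function, which is what makes it easy to unit test.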
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 7+ years of engineering experience - 3+ years of engineering team management experience - 8+ years of leading the definition and development of multi-tier web services experience - Knowledge of engineering practices and patterns for the full software/hardware/networks development life cycle, including coding standards, code reviews, source control management, build processes, testing, certification, and livesite operations - Experience partnering with product or program management teams - Experience designing or architecting (design patterns, reliability and scaling) of new and existing systems Prime Video is disrupting traditional media with an ever-increasing selection of movies, TV shows, Emmy Award winning original content, add-on subscriptions including HBO, and live events like Thursday Night Football. Our architecture operates at Amazon-scale and raises the bar for playback reliability, video start time, and image quality. Prime Video runs on thousands of device types in over 200 territories worldwide. The Prime Video Payments team serves as the voice of our customers, advocates on behalf of those customers, and delivers capabilities that allow us to acquire, engage, and retain more of them. Our mission is to ensure every internet-connected customer in the world can enjoy Prime Video. Our architecture serves billions of requests per day, with obsessively high reliability and low operational overhead. We leverage Amazon Web Services (AWS) technologies including EC2, S3, DynamoDB, Lambda, Kinesis, IoT, and CloudFront. As a Manager, Software Development on the Prime Video Payments team, you will oversee the design and implementation of significant technical projects by achieving results through SDEs and QAEs.
You will help influence the team’s technical and business strategy by making insightful contributions to priorities and approach, set the standard for engineering excellence, take the lead in identifying and solving ambiguous technical problems, architecture deficiencies, or areas where your team’s software bottlenecks the innovation of other teams, and collaborate with and influence other teams throughout the greater Prime Video organization. To achieve results through others, you will demonstrate technical influence over the different individual teams, either via a collaborative software effort or by increasing their productivity and effectiveness by driving software engineering best practices. You'll also lead design reviews for the org, actively participate in design reviews across Prime Video, provide insightful code reviews, and actively mentor other engineers. Key Responsibilities: · Manage 1-2 teams of high-caliber Software Engineers working on building and scaling world-class distributed systems · Recruit, hire, mentor, and coach SDEs and QAEs at different levels of experience · Manage and execute against project plans and delivery commitments within an Agile/Scrum environment · Contribute to and lead design, architecture, process, and development discussions · Own all operational metrics and support for your teams' software · Drive improvements in software engineering practices across engineering teams. Experience communicating with users, other technical teams, and senior leadership to collect requirements and describe software product features, technical designs, and product strategy. Experience recruiting, hiring, mentoring/coaching, and managing teams of Software Engineers to improve their skills and make them more effective product software engineers. Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 month ago
0.0 years
0 Lacs
Gurugram, Haryana
On-site
- Experience with data visualization using Tableau, Quicksight, or similar tools - Experience with data modeling, warehousing and building ETL pipelines - Experience writing complex SQL queries - Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Amazon Business Customer Support (ABCS) is looking for a Business Intelligence Engineer to help build next generation metrics and drive business analytics that have measurable impact. The successful candidate will have a strong understanding of different businesses and customer profiles - the underlying analytics, and the ability to translate business requirements into analysis, collect and analyze data, and make recommendations back to the business. BIEs also continuously learn new systems, tools, and industry best practices to help design new studies and build new tools that help our team automate and accelerate analytics. As a Business Intelligence Engineer, you will develop strategic reports, design UIs and drive projects to support ABCS decision making. This role is inherently cross-functional — you will work closely with finance teams, engineering, and leadership across Amazon Business Customer Service. A successful candidate will be a self-starter, comfortable with ambiguity, able to think big and be creative (while still paying careful attention to detail). You should be skilled in database design, be comfortable dealing with large and complex data sets, and have experience building self-service dashboards and using visualization tools, especially Tableau. You should have strong analytical and communication skills. You will work with a team of analytics professionals who are passionate about using machine learning to build automated systems and solve problems that matter to our customers. Your work will directly impact our customers and operations. Members of this team will be challenged to innovate using the latest big data techniques.
We are looking for people who are motivated by thinking big, moving fast, and exploring business insights. If you love to implement solutions to complex problems while working hard, having fun, and making history, this may be the opportunity for you. Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards on the key drivers of our business.
Key job responsibilities
- Scope, design, and build database structures and schemas.
- Create data pipelines using ETL connections/SQL queries.
- Retrieve and analyze data using a broad set of Amazon's data technologies.
- Pull data on an ad-hoc basis using SQL queries.
- Design, build, and maintain automated reporting and dashboards.
- Conduct deep dives to identify root causes of pain points and opportunities for improvement.
- Become a subject matter expert in ABCS data, and support team members in diving deep.
- Work closely with CSBI teams to ensure ABCS uses globally aligned standard metrics and definitions.
- Collaborate with finance and business teams to gather data and metrics requirements.
A day in the life
We thrive on solving challenging problems to innovate for our customers. By pushing the boundaries of technology, we create unparalleled experiences that enable us to rapidly adapt in a dynamic environment. If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply! Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets. Our inclusive culture empowers Amazonians to deliver the best results for our customers.
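As an illustration of the ad-hoc SQL pull and Python post-processing this BIE role describes, here is a minimal sketch. The table, columns, and metric are hypothetical, and SQLite stands in for a real warehouse:

```python
import sqlite3

# In-memory stand-in for a warehouse table of support contacts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (day TEXT, channel TEXT, resolved INTEGER)")
conn.executemany(
    "INSERT INTO contacts VALUES (?, ?, ?)",
    [("2024-06-01", "chat", 1), ("2024-06-01", "chat", 0),
     ("2024-06-01", "email", 1), ("2024-06-02", "chat", 1)],
)

# Ad-hoc SQL pull: resolution rate per channel.
rows = conn.execute(
    "SELECT channel, AVG(resolved) FROM contacts GROUP BY channel ORDER BY channel"
).fetchall()

# Light Python post-processing into a dashboard-style metric (percent).
metrics = {channel: round(rate * 100, 1) for channel, rate in rows}
print(metrics)  # {'chat': 66.7, 'email': 100.0}
```

In practice the same pattern applies with Redshift or another warehouse driver in place of `sqlite3`, with the resulting metric feeding a Tableau or QuickSight dashboard.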
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 month ago
50.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
hackajob is collaborating with Verisk to connect them with exceptional tech professionals for this role. Job Description The ideal candidate will have a strong background in AWS services, Azure DevOps, and various monitoring tools. You will be responsible for managing and optimizing our cloud infrastructure, building and maintaining CI/CD pipelines, and ensuring the reliability and performance of our applications. Key Responsibilities: AWS Services Management: Handle AWS services including Cognito, DynamoDB, API Gateway, Lambda, EC2, S3, and CloudWatch. Serverless Framework: Utilize the Serverless Framework to build and manage serverless infrastructure for APIs. Azure DevOps: Build and maintain CI/CD pipelines using Azure DevOps. Monitoring Tools: Implement and manage monitoring tools such as Splunk and CloudWatch to ensure system reliability and performance. Scripting: Develop and maintain shell and Python scripts for automation and system management. Linux Administration: Manage and optimize Linux-based systems. Version Control: Use Git for version control and collaboration. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a DevOps Engineer or in a similar role. Strong knowledge of AWS services and Azure DevOps. Proficiency in shell scripting and Linux administration. Experience with monitoring tools like Splunk and CloudWatch. Familiarity with version control systems, particularly Git. Excellent problem-solving skills and attention to detail. Strong communication and teamwork skills. Preferred Skills Experience with containerization technologies such as Docker and Kubernetes. Knowledge of infrastructure as code (IaC) tools like Terraform or CloudFormation. Understanding of security best practices in cloud environments.
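A small sketch of the Python scripting side of this role: computing an error rate from application logs of the kind a monitoring script might feed into CloudWatch or Splunk. The log format and alert threshold here are hypothetical:

```python
from collections import Counter

# Hypothetical application log lines; in practice these would be
# streamed from CloudWatch Logs or exported from Splunk.
log_lines = [
    "2024-06-01T10:00:01 INFO  request ok",
    "2024-06-01T10:00:02 ERROR upstream timeout",
    "2024-06-01T10:00:03 INFO  request ok",
    "2024-06-01T10:00:04 WARN  slow response",
    "2024-06-01T10:00:05 ERROR upstream timeout",
]

# The second whitespace-separated token is the log level.
levels = Counter(line.split()[1] for line in log_lines)
error_rate = levels["ERROR"] / len(log_lines)
print(f"error rate: {error_rate:.0%}")  # error rate: 40%

# A real script would page someone or emit a custom metric past a threshold.
ALERT_THRESHOLD = 0.25
needs_alert = error_rate > ALERT_THRESHOLD
```

The same structure works as a cron job or a scheduled Lambda; only the log source and the alerting call change.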
About Us For over 50 years, Verisk has been the leading data analytics and technology partner to the global insurance industry by delivering value to our clients through expertise and scale. We empower communities and businesses to make better decisions on risk, faster. At Verisk, you'll have the chance to use your voice and build a rewarding career that's as unique as you are, with work flexibility and the support, coaching, and training you need to succeed. For the eighth consecutive year, Verisk is proudly recognized as a Great Place to Work® for outstanding workplace culture in the US, fourth consecutive year in the UK, Spain, and India, and second consecutive year in Poland. We value learning, caring and results and make inclusivity and diversity a top priority. In addition to our Great Place to Work® Certification, we’ve been recognized by The Wall Street Journal as one of the Best-Managed Companies and by Forbes as a World’s Best Employer and Best Employer for Women, testaments to the value we place on workplace culture. We’re 7,000 people strong. We relentlessly and ethically pursue innovation. And we are looking for people like you to help us translate big data into big ideas. Join us and create an exceptional experience for yourself and a better tomorrow for future generations. 
Verisk Businesses Underwriting Solutions — provides underwriting and rating solutions for auto and property, general liability, and excess and surplus to assess and price risk with speed and precision Claims Solutions — supports end-to-end claims handling with analytic and automation tools that streamline workflow, improve claims management, and support better customer experiences Property Estimating Solutions — offers property estimation software and tools for professionals in estimating all phases of building and repair to make day-to-day workflows the most efficient Extreme Event Solutions — provides risk modeling solutions to help individuals, businesses, and society become more resilient to extreme events. Specialty Business Solutions — provides an integrated suite of software for full end-to-end management of insurance and reinsurance business, helping companies manage their businesses through efficiency, flexibility, and data governance Marketing Solutions — delivers data and insights to improve the reach, timing, relevance, and compliance of every consumer engagement Life Insurance Solutions - offers end-to-end, data insight-driven core capabilities for carriers, distribution, and direct customers across the entire policy lifecycle of life and annuities for both individual and group. Verisk Maplecroft — provides intelligence on sustainability, resilience, and ESG, helping people, business, and societies become stronger Verisk Analytics is an equal opportunity employer. All members of the Verisk Analytics family of companies are equal opportunity employers. We consider all qualified applicants for employment without regard to race, religion, color, national origin, citizenship, sex, gender identity and/or expression, sexual orientation, veteran's status, age or disability. Verisk’s minimum hiring age is 18 except in countries with a higher age limit subject to applicable law. 
Unsolicited resumes sent to Verisk, including unsolicited resumes sent to a Verisk business mailing address, fax machine or email address, or directly to Verisk employees, will be considered Verisk property. Verisk will NOT pay a fee for any placement resulting from the receipt of an unsolicited resume.
Posted 1 month ago
4.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description - Data Engineer
We at Pine Labs are looking for those who share our core belief - Every Day is Game day. We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services.
Role Purpose
We are looking for skilled Data Engineers with 4-12 years of experience to join our growing team. You will design, build, and optimize real-time and batch data pipelines, leveraging AWS cloud technologies and Apache Pinot to enable high-performance analytics for our business. This role is ideal for engineers who are passionate about working with large-scale data and real-time processing.
Responsibilities We Entrust You With
Data Pipeline Development: Build and maintain robust ETL/ELT pipelines for batch and streaming data using tools like Apache Spark, Apache Flink, or AWS Glue. Develop real-time ingestion pipelines into Apache Pinot using streaming platforms like Kafka or Kinesis.
Real-Time Analytics: Configure and optimize Apache Pinot clusters for sub-second query performance and high availability. Design indexing strategies and schema structures to support real-time and historical data use cases.
Cloud Infrastructure Management: Work extensively with AWS services such as S3, Redshift, Kinesis, Lambda, DynamoDB, and CloudFormation to create scalable, cost-effective solutions. Implement infrastructure as code (IaC) using tools like Terraform or AWS CDK.
Performance Optimization: Optimize data pipelines and queries to handle high throughput and large-scale data efficiently. Monitor and tune Apache Pinot and AWS components to achieve peak performance.
Data Governance & Security: Ensure data integrity, security, and compliance with organizational and regulatory standards (e.g., GDPR, SOC2). Implement data lineage, access controls, and auditing mechanisms.
Collaboration: Work closely with data scientists, analysts, and other engineers to translate business requirements into technical solutions.
Collaborate in an Agile environment, participating in sprints, standups, and retrospectives. Relevant Work Experience 4-12 years of hands-on experience in data engineering or related roles. Proven expertise with AWS services and real-time analytics platforms like Apache Pinot or similar technologies (e.g., Druid, ClickHouse). Proficiency in Python, Java, or Scala for data processing and pipeline development. Strong SQL skills and experience with both relational and NoSQL databases. Hands-on experience with streaming platforms such as Apache Kafka or AWS Kinesis. Familiarity with big data tools like Apache Spark, Flink, or Airflow. Strong problem-solving skills and a proactive approach to challenges. Excellent communication and collaboration abilities in cross-functional teams. Preferred Qualifications Experience with data lakehouse architectures (e.g., Delta Lake, Iceberg). Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes). Exposure to monitoring tools like Prometheus, Grafana, or CloudWatch. Familiarity with data visualization tools like Tableau or Superset. What We Offer Competitive compensation based on experience. Flexible work environment with opportunities for growth. Work on cutting-edge technologies and projects in data engineering and analytics. What We Value In Our People You take the shot: You Decide Fast and You Deliver Right. You are the CEO of what you do: you show ownership and make things happen. You own tomorrow: by building solutions for the merchants and doing the right thing. You sign your work like an artist: you seek to learn and take pride in the work you do.
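A pure-Python sketch of the kind of per-key streaming aggregation a Kafka-to-Pinot ingestion pipeline performs before indexing. The event fields, window size, and merchant IDs are illustrative only; a real pipeline would use Flink, Spark, or Pinot's own ingestion transforms:

```python
from collections import defaultdict

def window_of(ts: int, size: int = 60) -> int:
    """Bucket an epoch-seconds timestamp into a tumbling window start."""
    return ts - ts % size

# Illustrative payment events as they might arrive from a Kafka topic.
events = [
    {"merchant": "m1", "ts": 100, "amount": 250},
    {"merchant": "m1", "ts": 130, "amount": 100},
    {"merchant": "m2", "ts": 140, "amount": 500},
    {"merchant": "m1", "ts": 165, "amount": 75},
]

# Tumbling 60-second windows keyed by (merchant, window start).
totals: dict[tuple[str, int], int] = defaultdict(int)
for e in events:
    totals[(e["merchant"], window_of(e["ts"]))] += e["amount"]

print(dict(totals))
# {('m1', 60): 250, ('m1', 120): 175, ('m2', 120): 500}
```

Pre-aggregating per key and window like this is what keeps real-time analytics stores such as Pinot, Druid, or ClickHouse fast at query time.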
Posted 1 month ago
10.0 years
0 Lacs
India
On-site
About us Founded in 2008, CitNOW is an innovative, enterprise-level software product suite that allows automotive dealerships globally to sell more vehicles and parts more profitably. CitNOW’s app-based platform provides a secure, brand-compliant solution for dealers to build trust, transparency and long-lasting relationships. CitNOW Group was formed in 2021 to unite a portfolio of 12 global software companies leveraging innovation to aid retailers and manufacturers in delivering an outstanding customer experience. We have over 300 employees worldwide who all contribute to our vision to provide market-leading automotive solutions to drive efficiencies, seamlessly transforming every customer moment. The CitNOW Group is no ordinary technology company; we live a series of One Team values and this guiding principle forms the foundation of CitNOW Group’s award-winning, collaborative and inclusive culture. Recognised recently within the Top 25 Best Mid Sized Companies to work for within the UK, we pride ourselves on being a great place to work. About the role We are seeking a highly experienced Senior Database Administrator to own the performance, availability and security of our growing fleet of databases across MSSQL, PostgreSQL and AWS-managed services. This individual will play a key role in designing scalable data architectures, ensuring high availability and supporting development teams with optimised queries and resilient data pipelines. Key responsibilities: Database Administration & Operations Maintain, monitor, and tune production and staging MSSQL and PostgreSQL databases Manage high availability, backups, restores, replication, and disaster recovery strategies Ensure uptime and performance SLAs are met across cloud and hybrid environments AWS Cloud Expertise Administer RDS, Aurora, and EC2-hosted database instances.
Monitor database performance using CloudWatch, Performance Insights, and third-party tools Data Architecture & Design Work with engineering teams to model new schemas, optimize indexes, and review queries Implement and enforce best practices in database normalization, partitioning, and data lifecycle management Security & Compliance Ensure data encryption, access controls, and audit logging are in place and compliant with company policies Support GDPR/SOC 2/ISO 27001 initiatives with appropriate database controls and evidence collection Collaboration & Mentoring Provide database guidance to developers and DevOps teams during code reviews and deployments Share knowledge, mentor junior DBAs, and improve documentation and internal tooling Required skills & experience 10+ years of DBA experience, including at least: 5+ years with MSSQL Server (SQL Agent, SSIS, performance tuning) 5+ years with PostgreSQL (query optimization, extensions, logical replication) Strong hands-on experience with AWS database services: RDS, Aurora, S3, and IAM integration Solid understanding of SQL query optimization, execution plans, and troubleshooting slow queries Proven track record of managing production-grade environments with strict uptime SLAs Familiarity with infrastructure-as-code and automation using tools like Terraform or Ansible Nice-to-have qualifications: Experience with NoSQL databases (e.g., DynamoDB, Redis, Mongo) Exposure to CI/CD pipelines with database change management tools (Flyway, Liquibase) Knowledge of containerized database deployments (e.g., PostgreSQL on Kubernetes) Certification in AWS (e.g., AWS Certified Database – Specialty) In addition to a competitive salary, our benefits package is second to none. 
Employee wellbeing is at the heart of our people strategy, with a number of innovative wellness initiatives such as flexi-time, where employees can vary their start and finish times within our core business hours and/or extend their lunch break by up to 2 hours per day. Employees also benefit from an additional two half days paid leave per year to focus on their personal wellbeing. We recognise the development of our people is vital to the ongoing success of the business and proudly promote a culture of continuous learning and improvement, along with opportunities to develop and progress a successful career with us. The CitNOW Group is an equal opportunities employer that celebrates diversity across our international teams. We are passionate about creating an inclusive workplace where everyone’s individuality is valued. View our candidate privacy policy here - CitNOW-Group-Candidate-Privacy-Policy.pdf (citnowgroup.com)
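The query-optimization and execution-plan skills this DBA role calls for can be illustrated in miniature with Python's built-in SQLite (standing in for PostgreSQL's EXPLAIN or MSSQL's showplan; the table and index names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 50, float(i)) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index, the planner has to scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall()[0][-1]

# After adding an index on the filter column, it switches to an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall()[0][-1]

print(plan_before)  # e.g. "SCAN orders"
print(plan_after)   # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

The exact plan wording differs by engine (and by SQLite version), but the workflow is the same one a production DBA applies: read the plan, spot the scan, and add or adjust an index so the engine can seek instead.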
Posted 1 month ago
3.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients’ needs and exceeding their expectations. Your Team's Impact FactSet is seeking an experienced software development engineer with proven proficiency in deploying software adhering to best practices and with fluency in the development environment and related tools, code libraries, and systems. You will be responsible for the entire development process and will collaborate to create a theoretical design. We expect a demonstrated ability to critique code and production for improvement, as well as to receive and apply feedback effectively, and a proven ability to maintain expected levels of productivity while becoming increasingly independent as a software developer, requiring less direct engagement and oversight on a day-to-day basis from one’s manager. The focus is on developing applications, testing and maintaining software, and the implementation details of development; increasing the volume of work accomplished (with consistent quality, stability, and adherence to best practices), along with gaining mastery of the products to which one is contributing and beginning to participate in forward design discussions for how to improve based on one’s observations of the code, systems, and production involved. Software Developers provide project leadership and technical guidance along every stage of the software development life cycle.
What You'll Do Work on the Data Lake platform, which handles millions of documents annually and is built on a NoSQL architecture. Focus on developing new features while supporting and maintaining existing systems, ensuring the platform's continuous improvement. Develop innovative solutions for feature additions and bug fixes, optimizing existing functionality as needed to enhance system efficiency and performance. Engage with Python, frontend, and C#.NET repositories to support ongoing development and maintenance, ensuring robust integration and functionality across the application stack. Participate in weekly on-call support to address urgent queries and issues in common communication channels, ensuring operational reliability and user satisfaction. Create comprehensive design documents for major architectural changes and facilitate peer reviews to ensure quality and alignment with best practices. Utilize object-oriented programming principles to develop low-level designs that effectively support high-level architectural frameworks, contributing to scalable solutions. Collaborate with product managers and key stakeholders to thoroughly understand requirements and propose strategic solutions, leveraging cross-functional insights. Actively participate in technical discussions with principal engineers and architects to support proposed design solutions, fostering a collaborative engineering environment. Accurately estimate key development tasks and share insights with architects and engineering directors to align on priorities and resource allocation. Operate within an agile framework, collaborating with engineers and product developers using tools like Jira and Confluence. Engage in test-driven development and elevate team practices through coaching and reviews. Create and review documentation and test plans to ensure thorough validation of new features and system modifications.
Work effectively as part of a geographically diverse team, coordinating with other departments and offices for seamless project progression. These responsibilities aim to highlight the complexity of managing a platform that ingests millions of documents, underscoring the importance of innovative solutions, technical proficiency, and collaborative efforts to ensure the Data Lake platform's success. What We're Looking For Bachelor’s or master’s degree in computer science, engineering, or a related field is required. 3-6 years of experience in software development, with a focus on systems handling large-scale data operations. In-depth understanding of data structures and algorithms to optimize software performance and efficiency. Proficiency in object-oriented design principles is essential. Strong skills in Python, AWS, frontend technologies, and C#.NET to comprehend and contribute to existing applications. Experience with non-relational databases, specifically DynamoDB, MongoDB, and Elasticsearch, for optimal query development and troubleshooting. Experience with frontend technologies like Angular, React, or Vue.js to support development of key interfaces. Software Development: Familiarity with GitHub-based development processes, facilitating seamless collaboration and version control. Experience in building and deploying production-level services, demonstrating the ability to deliver reliable and efficient solutions. API and System Integration: Proven experience working with APIs, ensuring robust connectivity and integration across the system. AWS Expertise: Working experience with AWS services such as Lambda, EC2, S3, and AWS Glue is beneficial for cloud-based operations and deployments. Problem-Solving and Analysis: Strong analytical and problem-solving skills are critical for developing innovative solutions and optimizing existing platform components.
Communication and Collaboration: Excellent collaborative and communication skills, enabling effective interaction with geographically diverse teams and key stakeholders. On Call and Operational Support: Capability to address system queries and provide weekly On Call support, ensuring system reliability and user satisfaction. Organizational Skills: Ability to prioritize and manage work effectively in a fast-paced environment, demonstrating self-direction and resourcefulness. Required Skills Python Proficiency: Experience with Python and relevant libraries like Pandas and NumPy is beneficial for data manipulation and analysis tasks. Jupyter Notebooks: Familiarity with Jupyter Notebooks is a plus for supporting data visualization and interactive analysis. Agile Methodologies: Understanding of Agile software development is advantageous, with experience in Scrum as a preferred approach for iterative project management. Linux/Unix Experience: Exposure to Linux/Unix environments is desirable, enhancing versatility in system operations and development. What's In It For You At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means: The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up. Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days. Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives. A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions. 
- Career progression planning with dedicated time each month for learning and development.
- Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging.

Learn More About Our Benefits Here.

Salary is just one component of our compensation package and is based on several factors, including but not limited to education, work experience, and certifications.

Company Overview
FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and were recognized as a Glassdoor Employees' Choice Award winner among the Best Places to Work in 2023. Learn more at www.factset.com and follow us on X and LinkedIn.

At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
Posted 1 month ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title - DevOps Engineer

Responsibilities
- Designing and building infrastructure to support our AWS services and infrastructure.
- Creating and utilizing tools to monitor our applications and services in the cloud, including system health indicators, trend identification, and anomaly detection.
- Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
- Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
- Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.

Qualifications
- Bachelor's degree in CS or ECE.
- 3+ years of experience in a DevOps Engineer role.
- Strong experience in public cloud platforms (AWS, Azure, GCP), provisioning and managing core services (S3, EC2, RDS, EKS, ECR, EFS, SSM, IAM, etc.), with a focus on cost governance and budget optimization.
- Proven skills in containerization and orchestration using Docker, Kubernetes (EKS/AKS/GKE), and Helm.
- Familiarity with monitoring and observability tools such as SigNoz, OpenTelemetry, Prometheus, and Grafana.
- Adept at designing and maintaining CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, Bitbucket Pipelines, Nexus/Artifactory, and SonarQube to accelerate and secure releases.
- Proficient in infrastructure-as-code and GitOps provisioning with technologies like Terraform, OpenTofu, Crossplane, AWS CloudFormation, Pulumi, Ansible, and ArgoCD.
- Experience with cloud storage solutions and databases: S3, Glacier, PostgreSQL, MySQL, DynamoDB, Snowflake, Redshift.
- Strong communication skills, translating complex technical and analytical content into clear, actionable insights for stakeholders.

Preferred Qualifications
- Experience with advanced IaC and GitOps frameworks: OpenTofu, Crossplane, Pulumi, Ansible, and ArgoCD.
- Exposure to serverless and event-driven workflows (AWS Lambda, Step Functions).
- Experience operationalizing AI/ML workloads and intelligent agents (AWS SageMaker, Amazon Bedrock, canary/blue-green deployments, drift detection).
- Background in cost governance and budget management for cloud infrastructure.
- Familiarity with Linux system administration at scale.
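The monitoring responsibility above mentions tracking system health indicators and detecting anomalies. A minimal sketch of that idea, assuming nothing about the posting's actual tooling, is a rolling z-score check over a metric stream (the latency values and the threshold are illustrative assumptions):

```python
from statistics import mean, stdev

def detect_anomalies(values, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady latency readings (ms) with one spike at index 8.
latencies = [100, 102, 99, 101, 100, 98, 101, 100, 250, 101]
print(detect_anomalies(latencies))  # → [8]
```

In production, tools named in the posting such as Prometheus or SigNoz would evaluate equivalent rules over scraped metrics rather than a Python loop, but the underlying baseline-plus-deviation logic is the same.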
Posted 1 month ago
4.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description – Senior Data Engineer

We at Pine Labs are looking for those who share our core belief - "Every Day is Game day". We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services.

Role Purpose
We are looking for skilled Senior Data Engineers with 4-12 years of experience to join our growing team. You will design, build, and optimize real-time and batch data pipelines, leveraging AWS cloud technologies and Apache Pinot to enable high-performance analytics for our business. This role is ideal for engineers who are passionate about working with large-scale data and real-time processing.

Responsibilities We Entrust You With

Data Pipeline Development
- Build and maintain robust ETL/ELT pipelines for batch and streaming data using tools like Apache Spark, Apache Flink, or AWS Glue.
- Develop real-time ingestion pipelines into Apache Pinot using streaming platforms like Kafka or Kinesis.

Real-Time Analytics
- Configure and optimize Apache Pinot clusters for sub-second query performance and high availability.
- Design indexing strategies and schema structures to support real-time and historical data use cases.

Cloud Infrastructure Management
- Work extensively with AWS services such as S3, Redshift, Kinesis, Lambda, DynamoDB, and CloudFormation to create scalable, cost-effective solutions.
- Implement infrastructure as code (IaC) using tools like Terraform or AWS CDK.

Performance Optimization
- Optimize data pipelines and queries to handle high throughput and large-scale data efficiently.
- Monitor and tune Apache Pinot and AWS components to achieve peak performance.

Data Governance & Security
- Ensure data integrity, security, and compliance with organizational and regulatory standards (e.g., GDPR, SOC 2).
- Implement data lineage, access controls, and auditing mechanisms.
Collaboration
- Work closely with data scientists, analysts, and other engineers to translate business requirements into technical solutions.
- Collaborate in an Agile environment, participating in sprints, standups, and retrospectives.

Relevant Work Experience
- 4-12 years of hands-on experience in data engineering or related roles.
- Proven expertise with AWS services and real-time analytics platforms like Apache Pinot or similar technologies (e.g., Druid, ClickHouse).
- Proficiency in Python, Java, or Scala for data processing and pipeline development.
- Strong SQL skills and experience with both relational and NoSQL databases.
- Hands-on experience with streaming platforms such as Apache Kafka or AWS Kinesis.
- Familiarity with big data tools like Apache Spark, Flink, or Airflow.
- Strong problem-solving skills and a proactive approach to challenges.
- Excellent communication and collaboration abilities in cross-functional teams.

Preferred Qualifications
- Experience with data lakehouse architectures (e.g., Delta Lake, Iceberg).
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to monitoring tools like Prometheus, Grafana, or CloudWatch.
- Familiarity with data visualization tools like Tableau or Superset.

What We Offer
- Competitive compensation based on experience.
- Flexible work environment with opportunities for growth.
- Work on cutting-edge technologies and projects in data engineering and analytics.

What We Value In Our People
- You take the shot: you decide fast and you deliver right.
- You are the CEO of what you do: you show ownership and make things happen.
- You own tomorrow: by building solutions for the merchants and doing the right thing.
- You sign your work like an artist: you seek to learn and take pride in the work you do.
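The real-time ingestion work described above typically includes a transform step that reshapes raw streaming events into the flat, typed records a Pinot real-time table expects. A rough sketch of such a transform, with the event shape, field names, and units entirely hypothetical (the Kafka/Kinesis and Pinot wiring is omitted):

```python
import json
from datetime import datetime, timezone

def to_pinot_row(raw_event: str) -> dict:
    """Flatten a raw payment event (JSON string) into a flat, typed
    record suitable for a Pinot real-time table. All field names here
    are illustrative assumptions, not a real Pine Labs schema."""
    event = json.loads(raw_event)
    return {
        "txn_id": event["id"],
        "merchant_id": event["merchant"]["id"],      # flatten nested object
        "amount_paise": int(round(event["amount"] * 100)),  # money as integers
        "event_ts": int(
            datetime.fromisoformat(event["ts"])
            .replace(tzinfo=timezone.utc)
            .timestamp() * 1000
        ),  # epoch millis, a common Pinot time-column format
    }

raw = '{"id": "t1", "merchant": {"id": "m42"}, "amount": 12.5, "ts": "2024-01-01T00:00:00"}'
print(to_pinot_row(raw))
```

In a live pipeline this function would sit inside a Flink/Spark job or a Kafka consumer feeding the topic Pinot ingests from; keeping the transform pure, as here, makes it easy to unit test independently of the streaming infrastructure.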
Posted 1 month ago