2.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
You are invited to join Cintal Technologies as a Cloud Data Engineer in Chennai. As a Senior Full Stack Developer, you should have strong proficiency in React, AWS, and REST APIs, backed by over 8 years of web development experience. Your role will involve creating customer-facing applications, integrating cloud-based solutions, and ensuring the scalability, security, and performance of systems.

Responsibilities:
- Design, develop, and maintain customer-facing web applications using React, HTML5, CSS3, and JavaScript.
- Implement responsive development to ensure applications are compatible across various devices and screen sizes.
- Develop and integrate REST APIs using Python or Java, adhering to API security practices such as OAuth 2.0 and JWT.
- Utilize AWS services like Lambda, EC2, S3, ECS, and CloudFormation for cloud-based development and deployment.
- Collaborate with UI/UX teams to implement designs through tools like Figma and Sketch.
- Integrate cloud database solutions (PostgreSQL, MySQL) into applications for efficient data management.
- Ensure quality through unit testing frameworks such as Jest, Mocha, and Pytest.
- Establish and manage CI/CD pipelines for streamlined code deployment.
- Work with API tools such as Swagger, Postman, and Apigee to optimize and secure API integrations.
- Maintain code versioning and collaborate effectively using Git repositories.
- Implement security best practices including data encryption, API rate limiting, and input validation.

Required Skills & Experience:
- A bachelor's degree in computer science or a related field (required); a master's degree is preferred.
- Over 8 years of web development experience using HTML5, CSS3, JavaScript, and Web Components.
- Extensive front-end development experience, preferably with React JS.
- Proficiency in designing, building, and integrating REST APIs.
- In-depth experience with AWS services like Lambda, EC2, S3, CloudFormation, etc.
- Familiarity with UI/UX principles and design tools such as Figma or Sketch.
- Strong knowledge of API security, including OAuth 2.0, JWT, and API keys.
- Experience with cloud database solutions like PostgreSQL and MySQL.
- Knowledge of container platforms and orchestration (Docker, Kubernetes).
- Familiarity with unit testing frameworks (Jest, Mocha, Pytest) and CI/CD pipelines.

Preferred Skills:
- Experience with API tools like Swagger, Postman, and Apigee.
- Understanding of authentication protocols such as SSO, cookies, and session management.
- Knowledge of design patterns and best practices in development.

If you are ready to take on this challenging role at Cintal Technologies, please apply with your updated resume.
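The posting asks for REST APIs built in Python or Java and secured with OAuth 2.0 and JWT. As a rough, hedged illustration of the JWT side only, here is a minimal sketch using the PyJWT library; the secret, algorithm, and claim names are assumptions for illustration, not anything specified by the employer.

```python
# Minimal JWT verification sketch using PyJWT; the secret and claims are placeholders.
import jwt  # pip install PyJWT

SECRET = "replace-with-a-real-secret"  # assumption: symmetric HS256 signing


def verify_bearer_token(auth_header: str) -> dict:
    """Return the decoded claims if the Authorization header carries a valid JWT."""
    if not auth_header.startswith("Bearer "):
        raise ValueError("Missing bearer token")
    token = auth_header.split(" ", 1)[1]
    # Raises jwt.InvalidTokenError (expired, bad signature, etc.) on failure.
    return jwt.decode(token, SECRET, algorithms=["HS256"])


if __name__ == "__main__":
    demo = jwt.encode({"sub": "user-123"}, SECRET, algorithm="HS256")
    print(verify_bearer_token(f"Bearer {demo}"))
```

In a real service this check would typically sit in middleware in front of every protected route, and the key would come from a secrets store rather than a constant.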
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
Join us as a Cloud Data Engineer (AWS) at Barclays, where you will be responsible for supporting the successful delivery of location strategy projects to plan, budget, agreed quality, and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences.

To be successful as a Cloud Data Engineer (AWS) you should have experience with AWS cloud services such as S3, Glue, Athena, Lake Formation, CloudFormation, etc. You should also have strong SQL knowledge, proficiency in PySpark, a very good understanding of writing and debugging code, and quick learning abilities. Strong analytical and problem-solving skills are essential, along with excellent written and verbal communication skills. Other highly valued skills include good knowledge of Python, an understanding of SCM tools like Git, previous working experience in the banking or financial services domain, and experience with Databricks, Snowflake, Starburst, and Iceberg.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based out of Pune.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- Demonstrate a clear set of leadership behaviors to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
- Develop technical expertise in the work area, acting as an advisor where appropriate.
- Have an impact on the work of related teams within the area and partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities, and escalate breaches of policies/procedures appropriately.
- Advise and influence decision-making within your area of expertise, and manage risk and strengthen controls in relation to the work you own or contribute to.

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship - our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset - to Empower, Challenge, and Drive - the operating manual for how we behave.
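The role centres on building data pipelines with PySpark against AWS storage such as S3, with Glue and Athena downstream. The snippet below is a minimal, generic PySpark sketch of the kind of transform such a pipeline might run; the bucket names, paths, and column names are placeholders I have assumed, not details from the posting.

```python
# Minimal PySpark sketch: read raw CSV from S3, clean it, write partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sample-etl").getOrCreate()

# Placeholder input path; in practice this would be a catalogued raw zone.
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/trades/")

cleaned = (
    raw.dropDuplicates(["trade_id"])                       # assumed key column
       .withColumn("trade_date", F.to_date("trade_date"))  # normalise the date column
       .filter(F.col("amount").isNotNull())                 # drop incomplete records
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("trade_date")
        .parquet("s3://example-curated-bucket/trades/"))    # placeholder curated zone
```

Partitioned Parquet output like this is what typically makes later Athena queries over the curated zone cheap and fast.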
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
You will be responsible for designing, implementing, and maintaining scalable event-streaming architectures that support real-time data. Your duties will include designing, building, and managing Kafka clusters using Confluent Platform and Kafka cloud services (AWS MSK, Confluent Cloud). You will also develop and maintain Kafka topics, schemas (Avro/Protobuf), and connectors for data ingestion and processing pipelines. Monitoring and ensuring the reliability, scalability, and security of Kafka infrastructure will be crucial aspects of your role. Collaboration with application and data engineering teams to integrate Kafka with other AWS-based services (e.g., Lambda, S3, EC2, Redshift) is essential. Additionally, you will implement and manage Kafka Connect, Kafka Streams, and ksqlDB where applicable, and you will optimize Kafka performance, troubleshoot issues, and manage incidents.

To be successful in this role, you should have at least 3-5 years of experience working with Apache Kafka and Confluent Kafka. Strong knowledge of Kafka internals such as brokers, ZooKeeper, partitions, replication, and offsets is required, as is experience with Kafka Connect, Schema Registry, REST Proxy, and Kafka security. Hands-on experience with AWS services like EC2, IAM, CloudWatch, S3, Lambda, VPC, and load balancers is necessary. Proficiency in scripting and automation using tools like Terraform, Ansible, or similar is preferred. Familiarity with DevOps practices and tools such as CI/CD pipelines and monitoring tools (Prometheus/Grafana, Splunk, Datadog) is beneficial, and experience with containerization using Docker and Kubernetes is an advantage.

A Confluent Certified Developer or Administrator certification, AWS certification, experience with CI/CD tools such as AWS CodePipeline and Harness, and knowledge of containers (Docker, Kubernetes) will be considered additional assets for this role.
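Since the role involves building Kafka topics and ingestion pipelines on Confluent Platform, here is a minimal producer sketch using the confluent-kafka Python client; the broker address, topic name, and payload shape are illustrative assumptions only, and a production setup would add Schema Registry, serialization, and security configuration.

```python
# Minimal Kafka producer sketch with the confluent-kafka Python client.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker


def on_delivery(err, msg):
    """Report per-message delivery success or failure."""
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}] @ offset {msg.offset()}")


event = {"order_id": "A-1001", "amount": 42.5}  # illustrative payload
producer.produce(
    "orders",                 # placeholder topic
    key="A-1001",
    value=json.dumps(event),
    callback=on_delivery,
)
producer.flush()  # block until all queued messages are delivered
```

Keying on a stable identifier (here the order ID) is what keeps related events on the same partition and therefore in order for downstream consumers.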
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
The AVP, Principal Product Engineer at Synchrony plays a pivotal role in modernizing workloads by leading vendor refactoring efforts, executing break-fix tasks, and devising user enablement strategies. This role requires a profound understanding of AWS analytics services such as EMR Studio, S3, Redshift, Glue, and Tableau, coupled with robust skills in user engagement, training development, and change management. Collaboration with vendors, business users, and cloud engineering teams is essential to refactor legacy code, ensure seamless execution of fixes, and create comprehensive training materials and user job aids. Additionally, as the Principal Product Engineer, overseeing user testing, validation, and sign-offs is crucial for a smooth transition to modern cloud-based solutions, enhancing adoption, and minimizing disruptions. This role presents an exciting opportunity to lead cloud migration initiatives, bolster analytics capabilities, and drive user transformation efforts within an innovative cloud environment. The incumbent will be accountable for the technical success of the project, fostering a collaborative, efficient, and growth-oriented atmosphere within the development team.

Key Responsibilities:
- Lead and mentor a team of data/analytics/cloud engineers, ensuring adherence to best practices in data development, testing, and deployment.
- Conduct thorough data analysis to reveal insights, trends, and anomalies that support business decisions.
- Collaborate with cross-functional teams including BI, data science, and business stakeholders to understand data needs and translate them into technical solutions.
- Guide the team in architecting solutions involving AWS cloud components like EMR, S3, Athena, Redshift, SageMaker, and SAS Viya.
- Support data lineage, cataloging, and metadata management efforts using tools on AWS.
- Keep abreast of emerging technologies and recommend enhancements to the data platform architecture.

Qualifications/Requirements:
- Minimum 6+ years of expertise in data warehousing and enterprise data lake architectures; alternatively, 8+ years of relevant experience in the absence of a degree.
- Proficiency in crafting complex and optimized SQL queries for large-scale data analysis and transformation.
- 2+ years of experience in SQL, Python, PySpark, AWS EMR, S3, and Athena.
- Ability to lead and mentor a technical team, conduct code reviews, and enforce engineering best practices.
- Familiarity with metadata management tools and cloud-native data engineering practices.

Desired Characteristics:
- Experience with AWS cloud services.
- Certifications in AWS or any other cloud platform.
- Proficiency in Agile project management methods and practices.
- Capability to perceive the broader context beyond day-to-day coding tasks.
- Excellent verbal, written, communication, and organizational skills.
- Ability to empathize with team members' emotions and challenges, leading with compassion and support.
- Delegation skills to effectively assign tasks and empower team members to solve problems autonomously.
- Ability to innovate and implement new technologies, tools, or methodologies to enhance the development process.
- Working knowledge of Tableau, SAS Viya, Stonebranch, and Hive is advantageous.

Eligibility Criteria:
- Minimum 6+ years of expertise in data warehousing and enterprise data lake architectures; alternatively, 8+ years of relevant experience in the absence of a degree.
Work Timings: 3 PM to 12 AM IST
Note: The work timings may vary based on business needs and require flexibility between 06:00 AM Eastern Time and 11:30 AM Eastern Time for meetings with India and US teams. The remaining hours offer flexibility for the employee to choose.

For Internal Applicants:
- Understand the mandatory skills required for the role before applying.
- Notify your manager and HRM before applying for any role on Workday.
- Ensure that your professional profile is updated with relevant details and upload an updated resume.
- No ongoing corrective action plan is allowed.
- Only L9+ employees who have completed 18 months in the organization and 12 months in the current role and level are eligible to apply.

Grade/Level: 11
Job Family Group: Information Technology
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
chandigarh
On-site
You are a skilled AWS Engineer responsible for supporting the buildout and scaling of CodePulse, an internal engineering metrics platform. In this contract position for 36 months, your primary focus will be on implementing AWS infrastructure to facilitate high-throughput metric ingestion and analytics. You should have a proven track record of building scalable, event-driven systems using core AWS services such as ECS Fargate, SQS, API Gateway, RDS, S3, and ElastiCache. Previous experience working on similar data processing or pipeline-based platforms will be advantageous.

Your key responsibilities will include designing, implementing, and maintaining secure, auto-scaling AWS infrastructure for a container-based microservice application. You will deploy ECS Fargate workloads to process messages from SQS queues and store results in RDS and S3. Setting up and optimizing CloudWatch alarms, logs, and metrics for system observability and alerting will be essential, as will configuring and fine-tuning infrastructure components such as SQS, SNS, API Gateway, RDS (PostgreSQL), ElastiCache (Redis), and S3. You will support integration with GitHub and Jira by securely handling API credentials, tokens, and webhook flows. Writing and managing infrastructure-as-code using Terraform or AWS CDK, along with collaborating with internal engineers to troubleshoot issues, optimize performance, and manage deployment workflows, will also be crucial tasks.

For this position, you should have at least 4-6 years of hands-on experience working as an AWS DevOps or Cloud Engineer. Your expertise should include deploying and scaling services using ECS Fargate, SQS, API Gateway, RDS (PostgreSQL), and S3. Familiarity with Redis caching using ElastiCache and experience in tuning cache strategies will be beneficial. A strong command of CloudWatch, including logs, alarms, and dashboard setup, is required, as is proficiency in Terraform or AWS CDK for infrastructure automation and an understanding of VPCs, IAM roles and policies, TLS, and secure communication patterns. Demonstrated experience building or supporting event-driven microservices in a production setting, the ability to work independently in a remote, distributed team, and clear communication skills are essential.

Preferred qualifications for this role include experience building internal tools or platforms with metric processing, workflow orchestration, or CI/CD integration. Familiarity with GitHub Actions, Docker, and container image deployment via ECR, and experience optimizing AWS infrastructure for cost efficiency and auto-scaling under burst loads, will be advantageous. Prior experience integrating with third-party APIs such as GitHub, Jira, or ServiceNow would be a plus.
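The platform described here has ECS Fargate workers consuming messages from SQS and persisting results to RDS and S3. As a loose illustration of the consumption side only, here is a minimal boto3 polling loop; the queue URL, bucket name, and message shape are assumptions, and a real worker would add batching, error handling, and a dead-letter queue.

```python
# Minimal SQS worker sketch: long-poll a queue and archive each message body to S3.
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-metrics-queue"  # placeholder
BUCKET = "example-metrics-archive"  # placeholder


def poll_once() -> int:
    """Receive up to 10 messages, archive them, and delete them from the queue."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling keeps empty receives cheap
    )
    messages = resp.get("Messages", [])
    for msg in messages:
        body = json.loads(msg["Body"])
        key = f"events/{msg['MessageId']}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(body).encode("utf-8"))
        # Delete only after the result is safely stored.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    return len(messages)


if __name__ == "__main__":
    while True:
        poll_once()
```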
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
As a Cloud Computing Trainer, you will be responsible for delivering training on AWS and Azure technologies, focusing on essential services such as IAM, VPC, EC2, S3, and CloudTrail. Proficiency in Terraform/CloudFormation basics is required for this role. Your expertise should cover areas related to security, backups, and disaster recovery.

To excel in this position, you should have a minimum of 3 years of teaching experience or 5+ years in the industry. Your skill set should demonstrate deep hands-on knowledge, coupled with the ability to develop comprehensive curriculums that cater to diverse learning needs. Effective communication is key in this role, as you will be required to conduct training sessions both online and offline, as well as create engaging demo reels. A sales-ready mindset is essential for contributing to marketing efforts, conducting webinars, and enhancing the brand image. Moreover, you should be willing to take partial ownership or work on a revenue-sharing model for the courses you deliver.

This position is full-time, and proficiency in English is preferred. The work location is primarily in person, ensuring a hands-on and impactful learning experience.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
Are you an experienced platform engineer ready to lead and build the next generation of cloud-native SaaS products? Our client is seeking a Lead Platform Engineer to oversee the cloud platform architecture and implementation. This pivotal role requires a seasoned technologist with expertise in cloud-native development, particularly on AWS, and a dedication to constructing scalable, secure, and event-driven SaaS platforms. We are in search of an individual who can architect, construct, and lead the development of scalable, event-driven, and serverless applications on AWS.

Key Responsibilities:
- Architect and build scalable, secure, and efficient platform modules aligned with the company's vision and business objectives.
- Lead the end-to-end implementation of platform solutions throughout development, testing, and deployment phases.
- Mentor and guide platform engineers, conduct code reviews, and foster skill development.
- Collaborate with cross-functional teams, including product, UI/UX, and DevOps, to deliver seamless platform capabilities.
- Drive the adoption of Python, serverless architecture, and event-driven systems.
- Collaborate with cross-functional teams and influence platform strategy.
- Stay abreast of cloud technology advancements to ensure our platform remains future-ready and innovative.
- Contribute to establishing the technical strategy and vision in close collaboration with senior engineering leadership.

Desired Skills & Experience

Must Have:
- Proficiency in Python, with an emphasis on developing production-grade, scalable code.
- Hands-on experience with serverless architecture, microservices, and event-driven systems.
- Demonstrated success in designing and deploying SaaS platforms on AWS.
- In-depth knowledge of AWS services (Lambda, DynamoDB, API Gateway, RDS, S3, etc.) and architectural best practices.
- Demonstrated leadership in technology design, team mentorship, and project execution.
- Excellent communication, collaboration, and problem-solving abilities.

Nice to Have (an added advantage):
- AWS certifications (Architect/Developer).
- Experience with the Boto3 SDK.
- Familiarity with Azure/GCP.
- Knowledge of Docker/Kubernetes.

If you are passionate about building scalable platforms and leading teams that shape the future of cloud technology, we would love to connect with you!
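The role emphasises Python, serverless architecture, and event-driven systems on AWS (Lambda, DynamoDB, API Gateway). Below is a minimal, generic Lambda handler sketch for an API Gateway proxy event that writes to DynamoDB; the table name, fields, and route are placeholders I have assumed for illustration, not the client's actual design.

```python
# Minimal AWS Lambda handler sketch: API Gateway (proxy) event -> DynamoDB item.
import json
import os
import uuid
import boto3

TABLE_NAME = os.environ.get("TABLE_NAME", "example-orders")  # placeholder table
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def handler(event, context):
    """Handle a POST coming through API Gateway Lambda proxy integration."""
    payload = json.loads(event.get("body") or "{}")
    item = {
        "pk": str(uuid.uuid4()),
        "customer": payload.get("customer", "unknown"),
        "amount": str(payload.get("amount", 0)),  # store as string to sidestep float precision issues
    }
    table.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": item["pk"]}),
    }
```

In an event-driven design of this kind, the same function shape can also be triggered from SQS or EventBridge by swapping the event parsing, which is one reason the posting groups these services together.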
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
You are an experienced software developer with expertise in Node.js and Angular, responsible for driving the development of enterprise-grade, scalable applications. You possess strong hands-on experience with cloud-native deployments and excel in building high-performance, low-latency REST APIs.

Your main responsibilities include leading the design and architecture of scalable, enterprise-grade, low-latency applications using Node.js, Angular, and cloud technologies. You will develop and optimize scalable, high-performance REST APIs, integrate databases (SQL and NoSQL) with Node.js applications, and design middleware components for cross-cutting concerns. In addition, you will utilize AWS services for cloud infrastructure, engage in full-stack development across front-end (Angular) and back-end (Node.js), ensure code quality, identify and address performance bottlenecks, implement security best practices, conduct various tests for application reliability, and collaborate closely with cross-functional teams for seamless product delivery.

To excel in this role, you must have strong proficiency in JavaScript and TypeScript, expertise in Node.js for back-end and Angular for front-end development, knowledge of microservices and distributed systems architecture, experience in designing scalable and high-availability systems, familiarity with SQL and NoSQL databases, and hands-on experience with Kubernetes, AWS, and CI/CD pipelines. You should also be proficient in monitoring and optimizing application performance, using version control tools like Git, and possess strong analytical, problem-solving, team leadership, and communication skills. We are particularly interested in candidates with full life cycle experience in building at least one enterprise-grade product and implementing scalable deployments.
Posted 1 month ago
5.0 - 10.0 years
0 Lacs
karnataka
On-site
As a QA Engineer at Capco, a Wipro company, you will play a vital role in ensuring the quality and reliability of ETL pipelines and data integration workflows. With a focus on automation and industry-standard tools, you will be responsible for designing and implementing automated test strategies to validate data ingestion, transformation, and delivery processes. Your expertise will be crucial in performing regression, performance, and functional testing of data-centric applications and APIs.

With 6+ years of experience in QA Engineering, you will leverage your hands-on experience with Apache Kafka and proficiency in validating data pipelines on AWS (S3, Lambda, Glue, Redshift, or EMR). Your strong database testing skills with DB2 and SQL-based validation across large datasets will be essential in ensuring data correctness between source systems and downstream targets.

Collaboration is key in this role, as you will work closely with ETL developers, data engineers, and business analysts to define test coverage and quality metrics. Your ability to analyze test results, debug issues, and proactively identify data anomalies will contribute to the overall success of the projects. Additionally, your experience with automation tools/frameworks like Pytest, JUnit, TestNG, Selenium, or custom scripting in Python will be highly beneficial.

At Capco, we embrace diversity and inclusion, believing that it gives us a competitive advantage. With a tolerant, open culture that values creativity and inclusivity, you will have the opportunity to grow personally and professionally. There is no forced hierarchy at Capco, allowing everyone to take control of their career advancement.

If you have a passion for innovative thinking, delivery excellence, and thought leadership, and are looking to make an impact in transforming the energy and financial services industries, then this role at Capco is the perfect fit for you. Join us in delivering disruptive work that is shaping the future of banking, financial services, and energy sectors.
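Because the role centres on automated validation of ETL output (for example, confirming data correctness between source systems and downstream targets), here is a minimal Pytest sketch of that idea; the connection strings, table names, and queries are invented placeholders, and real suites would also check schema, nulls, and business rules rather than row counts alone.

```python
# Minimal Pytest sketch: compare a source row count with the loaded target row count.
# Assumes two SQLAlchemy-reachable databases; every name below is a placeholder.
import pytest
import sqlalchemy as sa

SOURCE_URL = "postgresql://user:pass@source-host/source_db"   # placeholder
TARGET_URL = "postgresql://user:pass@target-host/warehouse"   # placeholder


@pytest.fixture(scope="module")
def engines():
    return sa.create_engine(SOURCE_URL), sa.create_engine(TARGET_URL)


def count(engine, query: str) -> int:
    with engine.connect() as conn:
        return conn.execute(sa.text(query)).scalar_one()


def test_customer_rowcount_matches(engines):
    src, tgt = engines
    src_rows = count(src, "SELECT COUNT(*) FROM customers")
    tgt_rows = count(tgt, "SELECT COUNT(*) FROM dw.customers")
    assert src_rows == tgt_rows, f"Row count mismatch: source={src_rows}, target={tgt_rows}"
```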
Posted 1 month ago
7.0 - 15.0 years
0 Lacs
hyderabad, telangana
On-site
You should have 8-15 years of experience in software quality assurance with a Bachelor's degree in computer science engineering or a related field. Your mandatory skills should include expertise in UI automation using Selenium with Python, the Pytest framework, AWS services (EC2, S3, Lambda, etc.), and CI/CD. You must possess strong knowledge of scripting in Python and be capable of understanding functional/technical specifications and analyzing data.

Your responsibilities will involve developing web automation in the Selenium or Cucumber framework, building a REST API automation framework, and writing test plans and test strategy documents based on product requirements. As a QA professional, you should have an excellent QA aptitude, the ability to drive process improvements, and the ability to work both independently and as part of a team.

Immediate joiners with a notice period of up to 15 days are preferred for this contract role based in Bengaluru. If you meet these qualifications and are interested in this opportunity, please share your CV at ashok@brillius.com.
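Given the requirement for UI automation using Selenium with Python and the Pytest framework, here is a minimal sketch of a Pytest-driven Selenium check using the Selenium 4 API; the URL and element locator are placeholders, not part of the actual product under test.

```python
# Minimal Selenium + Pytest sketch (Selenium 4 API); URL and locators are placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")   # run without a visible browser window
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()


def test_homepage_title_and_search_box(driver):
    driver.get("https://example.com")                 # placeholder URL
    assert "Example" in driver.title
    search_box = driver.find_element(By.NAME, "q")    # assumed element name
    search_box.send_keys("selenium")
```

The fixture pattern shown here is what typically lets the same browser setup and teardown be shared across an entire Pytest suite.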
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a DevOps Engineer, you will play a crucial role in building and maintaining CI/CD pipelines for multi-tenant deployments using Jenkins and GitOps practices. You will be responsible for managing Kubernetes infrastructure (AWS EKS), Helm charts, and service mesh configurations (Istio). You will use tools like kubectl, Lens, or other dashboards for real-time workload inspection and troubleshooting.

Your main focus will include evaluating the security, stability, compatibility, scalability, interoperability, monitorability, resilience, and performance of our software. You will support development and QA teams with code merge, build, install, and deployment environments. Additionally, you will ensure the continuous improvement of the software automation pipeline to enhance build and integration efficiency. Monitoring and maintaining the health of software repositories and build tools will also be part of your responsibilities. You will be required to verify final software release configurations, ensuring integrity against specifications, architecture, and documentation. Your role will involve performing fulfillment and release activities to ensure timely and reliable deployments.

To be successful in this role, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. You should have 8-12 years of hands-on experience in DevOps or SRE roles for cloud-native Java-based platforms. Deep knowledge of AWS cloud services (EKS, IAM, CloudWatch, S3, Secrets Manager), including networking and security components, is essential. Strong experience with Kubernetes, Helm, ConfigMaps, Secrets, and Kustomize is required. You should have expertise in authoring and maintaining Jenkins pipelines integrated with security and quality scanning tools. Hands-on experience with infrastructure provisioning tools such as Docker and CloudFormation is preferred. Familiarity with CI/CD pipeline tools and build systems including Jenkins and Maven is a plus. Experience administering software repositories such as Git or Bitbucket is beneficial. Proficiency in scripting/programming languages such as Ruby, Groovy, and Java is desired. You should have a proven ability to analyze and resolve issues related to performance, scalability, and reliability. A solid understanding of DNS, load balancing, SSL, TCP/IP, and general networking and security best practices will be advantageous in this role.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a software developer at our company, you will play a vital role in all stages of the software development lifecycle. Your responsibilities will include designing, implementing, and maintaining Java-based applications that are capable of handling high volume and low latency. You will analyze user requirements to define business objectives, envision system features and functionality, and identify and resolve any technical issues that may arise.

In this role, you will also be expected to create detailed design documentation, propose changes to the current Java infrastructure, develop technical designs for application development, and conduct software analysis, programming, testing, and debugging. Additionally, you will manage Java and Java EE application development, develop documentation to assist users, and transform requirements into stipulations. You will also prepare and produce releases of software components and support continuous improvement by investigating alternatives and technologies.

We are looking for candidates with experience in advanced Java 8 or later and at least 5 years of experience with Java, along with proficiency in Spring Boot, JavaScript, Hibernate, REST APIs, web services, background services, and Git, strong knowledge of OOP, and experience with PostgreSQL, Oracle, or SQL Server. Experience with AWS Lambda, EC2, and S3 is also preferred.

At our company, we are driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Working with us comes with several perks, including clear objectives to ensure alignment with our mission, abundant opportunities for engagement with customers, product managers, and leadership, and insightful guidance from managers through ongoing feedforward sessions. You will have the opportunity to cultivate and leverage connections within diverse communities of interest, embrace continuous learning and upskilling opportunities through Nexversity, and enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies.

Join our team to contribute to the development of hyper-personalized solutions for high-growth enterprises and help transform visions into reality. Embrace a culture of continuous learning, innovation, and collaboration to tailor your growth with us.
Posted 1 month ago
8.0 - 12.0 years
0 - 0 Lacs
pune, maharashtra
On-site
As a Founding Software Engineer at Destro, you will play a pivotal role in unifying and enhancing our existing MVP codebases to create a robust orchestration platform that will serve as the foundational infrastructure for Fortune 500 customers. Your primary responsibility will involve auditing, refactoring, and consolidating multiple MVP codebases into a single, reliable product. Additionally, you will architect a flexible platform that can seamlessly support both cloud-native and on-premises deployments while designing and implementing CI/CD pipelines for enterprise rollouts with high confidence.

Your expertise will be crucial in building containerization and orchestration workflows using technologies such as Docker and Kubernetes. You will also focus on fortifying the platform for enhanced uptime, failover capabilities, and disaster recovery measures. Implementing robust security protocols, authentication mechanisms, and role-based access control will be essential to ensure enterprise readiness. Collaboration with robotics, optimization, and product teams will be a key aspect of your role, as you work towards ensuring optimal performance in real-world environments.

You are expected to be proficient in a diverse tech stack that includes Python (FastAPI), Node.js, React, TypeScript, Docker, Kubernetes, Terraform, AWS services, Linux, PostgreSQL, Redis, gRPC, MQTT, among others. Any experience with robotics, IoT, RTLS deployments, or wearable UX will be considered a bonus.

The ideal candidate for this position should possess a minimum of 8 years of experience in full-stack development and DevOps, preferably in a high-caliber company or startup environment. Prior experience in shipping and supporting enterprise software, particularly in on-prem or hybrid environments, will be highly valued. Your ability to develop solutions for latency-sensitive, resource-constrained, and hardware-adjacent systems will be critical to the success of this role. You should demonstrate a strong commitment to reliability, observability, and maintainability while excelling in ambiguous situations, taking ownership of outcomes, and operating at a fast pace without compromising critical functionalities.

If you are passionate about making a real impact through mission-driven work, eager to contribute to a startup's success, and keen on working alongside a talented team with diverse backgrounds, Destro offers an exciting opportunity for you to shape the future of automation at scale.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You are a skilled Serverless Developer with 5-7 years of experience, located in Pune and working in the shift timing of 5:30 PM - 2:30 AM. Your expertise lies in AWS Lambda and TypeScript, enabling you to design and develop scalable, cloud-native applications. Your role involves leveraging various AWS services like API Gateway, DynamoDB, S3, SQS/SNS, and EventBridge to build robust solutions.

Your responsibilities include designing, developing, and deploying serverless applications using AWS Lambda, TypeScript, and Node.js. You will be implementing Infrastructure as Code using tools like AWS CDK, Serverless Framework, and Terraform. Additionally, you will build and maintain CI/CD pipelines, ensure monitoring/logging using CloudWatch and X-Ray, and write unit and integration tests using Jest and Mocha. Collaboration with DevOps and architecture teams is crucial to ensure scalable and secure cloud environments. Troubleshooting and resolving application and infrastructure issues in a timely manner are also part of your responsibilities.

Your key skills include proficiency in TypeScript and Node.js, along with hands-on experience in AWS services such as Lambda, API Gateway, DynamoDB, S3, SQS/SNS, and EventBridge. You are adept at using Infrastructure as Code tools like AWS CDK, Serverless Framework, and Terraform, as well as testing tools like Jest and Mocha. Your expertise extends to monitoring and DevOps tools like CI/CD, CloudWatch, and X-Ray. Bonus skills like GraphQL, ECS/Fargate, and AWS certifications are considered advantageous for this role.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
You are a talented and innovative AI/ML Engineer with a strong background in machine learning and deep learning. You have a keen interest in Generative AI (GenAI) and are excited to explore, implement, and scale GenAI models. Your primary responsibilities include designing and deploying end-to-end AI/ML solutions, experimenting with cutting-edge GenAI frameworks and tools, and collaborating with cross-functional teams to solve business problems using ML solutions.

Your key responsibilities involve designing and developing robust ML models, including traditional and deep learning approaches. You will also build, train, fine-tune, and deploy Generative AI models for various use cases such as text, image, or code generation. Utilizing AWS services like SageMaker, Lambda, EC2, S3, and Glue to build scalable AI/ML solutions is an essential part of your role. Additionally, you will create and automate ML pipelines using CI/CD and MLOps best practices, conduct data preprocessing, feature engineering, and EDA for model development, and ensure model performance, fairness, and explainability throughout the lifecycle.

To excel in this role, you should hold a Bachelor's/Master's degree in Computer Science, AI/ML, Data Science, or a related field and have at least 3 years of hands-on experience in machine learning and deep learning using Python with TensorFlow, PyTorch, Hugging Face, etc. Strong hands-on experience with AWS AI/ML services, experience in deploying LLMs or transformer-based models in production, and familiarity with GenAI tools like Hugging Face Transformers, LangChain, OpenAI API, Bedrock, or LlamaIndex are required. You should also possess working knowledge of REST APIs, microservices, and containerized deployments, proficiency in MLOps tools such as MLflow, SageMaker Pipelines, or Kubeflow, and strong communication skills to present complex ML concepts clearly.

Preferred qualifications include experience with prompt engineering, fine-tuning, or RAG techniques, an AWS Machine Learning Specialty certification or equivalent, exposure to NLP, computer vision, or multi-modal GenAI models, and contributions to open-source GenAI projects or research. Stay updated with the latest trends in GenAI and propose innovative solutions using LLMs or transformer-based architectures to drive continuous improvement in AI/ML solutions.
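The role calls for hands-on work with transformer-based GenAI models via libraries such as Hugging Face Transformers. A minimal text-generation sketch with that library is shown below; the model name and prompt are arbitrary choices for illustration only, and a production use case would typically fine-tune or prompt a larger instruction-tuned model behind a serving layer.

```python
# Minimal text-generation sketch with Hugging Face transformers.
# The model ("gpt2") and prompt are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Summarize the benefits of MLOps in one sentence:",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The same pipeline object can be wrapped in a REST endpoint or a SageMaker inference container, which is roughly where the posting's MLOps and deployment requirements come in.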
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
The Business Analyst, Business Intelligence position at Bloom Energy offers you the opportunity to be a part of a company that is revolutionizing the way energy is generated and delivered globally. Bloom Energy is dedicated to providing clean, reliable, and affordable energy solutions through its innovative Energy Server technology. As a Business Analyst, you will be instrumental in supporting the Business Intelligence Senior Manager in Mumbai, India.

Your main responsibilities will include developing automated tools and dashboards to enhance visibility and accuracy of P&L line items, collaborating with the leadership team to enhance forecasting tools and ensure precise P&L forecasts, liaising with the finance team to track actuals versus forecasts, fulfilling ad hoc requests for data analysis and scenario planning from the operations team, conducting in-depth analysis of costs to provide profitability insights to the leadership, and working with the IT team to create production-ready tools for automating the Services P&L.

To excel in this role, you should have strong analytical and problem-solving skills, be proficient in Python, Excel, and PowerPoint, have experience in financial planning and forecasting, be skilled in dashboarding tools such as Tableau, be familiar with databases/data lakes like PostgreSQL, Cassandra, AWS RDS, Redshift, and S3, and have knowledge of version control software like Git. The ideal candidate will hold a Bachelor's degree in Business Management, Data Analytics, Computer Science, Industrial Engineering, or related fields.

Join Bloom Energy in its mission towards a 100% renewable future and contribute to providing resilient electricity solutions that can withstand various challenges. Bloom Energy's fuel-flexible technology has demonstrated its reliability in the face of natural disasters and power disruptions, without emitting harmful local air pollutants. Additionally, Bloom is spearheading the transition to renewable fuels like hydrogen and biogas with cutting-edge solutions in hydrogen power generation and electrolyzers. Our diverse range of clients includes manufacturing facilities, data centers, healthcare institutions, retail outlets, low-income housing projects, and educational institutions.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
maharashtra
On-site
We are looking for a candidate who is enthusiastic about working in a startup environment and capable of building things from scratch individually. The ideal candidate should have past experience in developing scalable consumer-facing applications, with expertise in managing latency and traffic. As a full-stack individual contributor, you should be able to code at speed and take complete ownership of the product. A minimum of 8 years of experience is required, and the job is located in Vile Parle (East), Mumbai.

The required skill set includes 8+ years as a full-stack Java/JavaScript developer. Proficiency in microservices, distributed systems, and cloud services such as AWS (EC2, S3, Lambda, load balancing, serverless) and Kubernetes is essential. In terms of programming, you should have a strong background in back-end technologies like Node.js, MongoDB, Java Spring, and PostgreSQL. Familiarity with front-end technologies like React.js and micro frontends, as well as queuing systems like Kafka, is preferred. Experience with Agile Scrum methodologies is a requirement.

Your responsibilities will include end-to-end coding, from software architecture to managing the scaling of high-throughput (100,000 RPS), high-volume transactions. You will be required to discuss business requirements and timelines with management, create task plans for junior members, and oversee the day-to-day activities of the team. Mentoring the team on best coding practices, ensuring timely delivery of modules, managing security vulnerabilities, and being a full individual contributor are key aspects of the role.

The ideal candidate should have a passion for tech innovation, problem-solving abilities, a "go-getter" attitude, and be extremely humble and polite. Experience in product companies and managing small teams would be a plus.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
navi mumbai, maharashtra
On-site
You are a skilled Serverless Developer with 5-7 years of experience, proficient in AWS Lambda and TypeScript. Your main responsibility will be to design and develop scalable, cloud-native applications. You should be well-versed in multiple AWS services, building CI/CD pipelines, and collaborating with cross-functional teams to deliver high-quality software solutions.

Your primary duties include designing, developing, and deploying serverless applications using AWS Lambda, TypeScript, and Node.js. You will utilize various AWS services like API Gateway, DynamoDB, S3, SQS/SNS, and EventBridge to create robust solutions. Implementing Infrastructure as Code using tools such as AWS CDK, Serverless Framework, and Terraform will be essential. Furthermore, you will be responsible for building and maintaining CI/CD pipelines, ensuring proper monitoring/logging using tools like CloudWatch and X-Ray, and writing unit and integration tests using Jest and Mocha. Collaboration with DevOps and architecture teams will be crucial to ensure scalable and secure cloud environments. You will also troubleshoot and resolve application and infrastructure issues promptly.

Key Skills:
- Languages/Frameworks: TypeScript, Node.js
- AWS Services: Lambda, API Gateway, DynamoDB, S3, SQS/SNS, EventBridge
- Infrastructure as Code: AWS CDK, Serverless Framework, Terraform
- Testing Tools: Jest, Mocha
- Monitoring & DevOps: CI/CD, CloudWatch, X-Ray
- Bonus Skills: GraphQL, ECS/Fargate, AWS Certifications

If you possess the required skills and experience, and are ready to work in Pune with a shift timing of 5:30 PM - 2:30 AM and a notice period of Immediate to 30 Days, we look forward to receiving your application.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
jodhpur, rajasthan
On-site
As a Full-time Backend Developer, you will need to have at least 5 years of experience working with Python and hands-on experience with frameworks such as Flask, Django, or FastAPI. It is essential to be proficient in AWS services like Lambda, S3, SQS, and CloudFormation. Your expertise should also include working with relational databases like PostgreSQL or MySQL and familiarity with testing frameworks like Pytest or NoseTest. In addition, you should have a strong understanding of REST API development and JWT authentication, along with proficiency in version control tools such as Git.

As a Full-time Frontend Developer, you should have a minimum of 3 years of experience with ReactJS and a thorough understanding of its core principles. It is crucial to have experience with state management tools like Redux Thunk, Redux Saga, or Context API, along with familiarity with RESTful APIs and modern front-end build pipelines and tools. Your skill set should include proficiency in HTML5, CSS3, and pre-processing platforms like SASS/LESS. Experience with modern authorization mechanisms like JSON Web Tokens (JWT) and familiarity with front-end testing libraries such as Cypress, Jest, or React Testing Library is highly desirable. Additionally, experience in developing shared component libraries will be considered a plus.

Overall, the ideal candidate for this role should be well-versed in both backend and frontend development technologies, possess strong problem-solving skills, and be able to work effectively in a team environment.
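The backend profile asks for hands-on experience with frameworks such as Flask, Django, or FastAPI and for REST API development. Here is a minimal FastAPI sketch of a typical resource endpoint; the route, model fields, and in-memory store are assumptions for illustration, not part of the actual product.

```python
# Minimal FastAPI sketch: one POST endpoint with request validation via Pydantic.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Order(BaseModel):
    customer: str
    amount: float


ORDERS: dict[int, Order] = {}  # in-memory store, for the sketch only


@app.post("/orders/{order_id}", status_code=201)
def create_order(order_id: int, order: Order):
    if order_id in ORDERS:
        raise HTTPException(status_code=409, detail="Order already exists")
    ORDERS[order_id] = order
    return {"id": order_id, **order.model_dump()}

# Run locally with:  uvicorn main:app --reload
```

FastAPI validates the request body against the Pydantic model automatically, which is one reason it pairs naturally with the JWT-protected REST APIs the posting mentions.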
Posted 1 month ago
12.0 - 16.0 years
0 Lacs
noida, uttar pradesh
On-site
You will be responsible for designing and implementing scalable, secure, and cost-effective cloud architectures. Your role will involve leading cloud migration initiatives and supporting the development of cloud-native applications. Collaboration with development, DevOps, and security teams is essential to deliver robust cloud solutions. You will be required to create and maintain architecture diagrams, documentation, and best practices. Evaluating and recommending cloud services and tools based on project and business needs will be part of your responsibilities. Automation using Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or ARM templates is a key aspect of this role. Ensuring compliance with security standards, governance policies, and regulatory frameworks is crucial. Monitoring cloud environments and troubleshooting issues to ensure high availability and performance are also part of your duties. Additionally, providing technical leadership and mentorship to engineers is expected.

To qualify for this position, you should have 12+ years of experience in DevOps Cloud Architecture roles, with a focus on AWS. Your tasks will include developing, testing, and maintaining Terraform scripts to automate AWS infrastructure provisioning and scaling. Building, configuring, and optimizing CI/CD pipelines (e.g., Jenkins, GitLab CI/CD, AWS CodePipeline) to support continuous integration and delivery of applications will be a key responsibility.

You will also be responsible for writing and managing Ansible playbooks for configuration management and automating deployment tasks across environments. Utilizing Python scripting to develop automation solutions for AWS services and enhance operational efficiency will be part of your role. Supporting application teams by implementing and troubleshooting CI/CD pipelines and optimizing release workflows for faster deployment cycles is also expected. Managing, configuring, and optimizing AWS resources including EC2, RDS, S3, VPC, IAM, and Lambda for day-to-day development and production workloads is a critical aspect of this position.
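Among other duties, the role uses Python scripting to automate AWS operations alongside Terraform and Ansible. As one small, generic illustration of that kind of scripting, the boto3 sketch below reports running EC2 instances that are missing a required tag; the tag key and region are assumptions, not a stated governance policy of the employer.

```python
# Minimal boto3 sketch: flag running EC2 instances missing a required "Owner" tag.
import boto3

REQUIRED_TAG = "Owner"  # assumption: a tagging policy requires this key

ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region


def untagged_instances() -> list[str]:
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing


if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"Missing {REQUIRED_TAG} tag: {instance_id}")
```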
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
At NiCE, challenges are not limited; rather, they are embraced to push the boundaries. The work culture is ambitious, transformative, and driven by a desire to succeed. If you are someone who thrives in such an environment, NiCE offers you an exceptional career opportunity that will ignite a passion within you.

As a key contributor in the development of a multi-region, multi-tenant SaaS product, you will be working closely with the core R&D team. Utilizing technologies such as React, JavaScript, CSS, HTML, and AWS, you will be responsible for building scalable, high-performance products within a cloud-first, microservices-driven environment. Your impact will be significant as you undertake tasks such as developing new user-facing features using React.js, creating reusable components and front-end libraries, translating designs into high-quality code, optimizing components for maximum performance across various web-capable devices and browsers, and translating business requirements into technical specifications.

To excel in this role, you are expected to have a Bachelor's degree in computer science, information technology, or a related field, along with 2 to 4 years of experience. Proficiency in JavaScript, CSS, HTML, and front-end languages is essential. Additionally, knowledge of AWS services like S3 and Lambda, familiarity with React tools (React.js, Redux, Flux), experience in user interface design, and understanding of React.js principles are required. Skills in writing unit tests, familiarity with ECMAScript specifications, RESTful APIs, and modern authorization mechanisms like JSON Web Token are also necessary. Knowledge of front-end build pipelines, code versioning tools, and experience with CI/CD pipelines will be advantageous.

Joining NiCE means becoming a part of a dynamic, global company that values excellence and innovation. The work environment is fast-paced, collaborative, and encourages creativity. With endless internal career opportunities, NiCE offers a platform for continuous learning and growth across various roles, disciplines, and locations. If you are passionate, innovative, and eager to raise the bar constantly, NiCE could be the perfect fit for you.

NiCE follows the NiCE-FLEX hybrid model, allowing for maximum flexibility in work arrangements. Employees work 2 days from the office and 3 days remotely each week. Office days focus on face-to-face meetings, fostering teamwork, collaborative thinking, innovation, and a vibrant atmosphere.

NiCE, known for its software products used by over 25,000 global businesses, including 85 of the Fortune 100 corporations, excels in delivering exceptional customer experiences, combating financial crime, and ensuring public safety. With over 8,500 employees across 30+ countries, NiCE is recognized as a market leader in AI, cloud, and digital domains.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As an AWS Cloud Engineer, your responsibilities will include migrating applications to the AWS cloud, understanding user requirements, and envisioning system features and functionality. You will be tasked with identifying bottlenecks and bugs, and providing recommendations for system solutions by evaluating the advantages and disadvantages of custom development. Your role will involve contributing to team meetings and troubleshooting development and production issues across various environments and operating platforms.

Moreover, you will need to grasp architecture requirements and ensure effective design, development, validation, and support activities. It will be crucial to comprehend and analyze client requirements, refactor systems for workload migration or modernization to the cloud (specifically AWS), and oversee end-to-end feature development while addressing challenges encountered during implementation. Additionally, you will be responsible for creating detailed design artifacts, working on development, conducting code reviews, implementing validation and support activities, and promoting thought leadership within your technology specialization area.

The ideal candidate for this role should possess expertise in containerization and microservices development on AWS. You should have a profound understanding of design issues and best practices, as well as solid knowledge of object-oriented programming. Familiarity with various design and architectural patterns, software development processes, and implementing automated testing platforms and unit tests is essential. Your hands-on experience in building applications using Java/J2EE, Spring Boot, and Python is highly valued, along with knowledge of RESTful APIs and the ability to design cloud-ready applications using cloud SDKs and microservices. Furthermore, exposure to cloud compute services such as VMs, PaaS services, containers, serverless computing, and storage services on AWS is beneficial. A good understanding of application development design patterns is also desired.

In terms of competencies, effective verbal and written communication skills, the ability to engage with remote teams, high flexibility for travel, and the capacity to work autonomously or within a multi-disciplinary team environment are essential traits for success in this role.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You are a skilled Full Stack Developer, primarily focusing on backend development with Node.js and having a working knowledge of React.js. Your main responsibility will be to develop a custom enterprise platform that interfaces with SDLC tools like JIRA, Jenkins, GitLab, and others. This platform aims to streamline license and access management, automate administrative tasks, and provide robust dashboards and governance features.

To excel in this role, you should have at least 4-6 years of professional development experience. Your expertise in Node.js should cover async patterns and performance tuning. Additionally, hands-on experience with AWS Lambda and serverless architecture is essential. You must also be adept at building integrations with tools like JIRA, Jenkins, GitLab, Bitbucket, etc. Knowledge of React.js for UI development and integration is required, along with a solid understanding of RESTful APIs, webhooks, and API security.

It would be beneficial if you have familiarity with Git and collaborative development workflows, exposure to CI/CD practices, and infrastructure as code. Experience with AWS services such as DynamoDB, S3, EventBridge, and Step Functions is a plus. Knowledge of enterprise SSO, OAuth2, or SAML is desirable. Prior experience in automating tool admin tasks and DevOps workflows will be advantageous, as well as an understanding of modern monitoring/logging tools like CloudWatch or ELK.

Working in this role, you will have the opportunity to work on a transformative platform with direct enterprise impact. You will have the freedom to innovate and contribute to the automation of key IT and DevOps functions. This role will also expose you to modern architectures including serverless, microservices, and event-driven systems. You can expect a collaborative, outcome-oriented work culture that fosters growth and success.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
Job Description: As a Backend Lead with expertise in User Interface (UI) at GSPANN, you will play a crucial role in managing the complete software development process from conception to deployment. Your responsibilities will include maintaining and upgrading the software post-deployment, overseeing the end-to-end life cycle of software production, guiding software analysis, writing, building, and deployment processes, supervising automated testing, and providing feedback to management during the development phase. Additionally, you will be involved in modifying and testing changes to previously developed programs.

You should possess a minimum of 7 years of experience in developing enterprise-level applications utilizing Node, TypeScript, JavaScript, HTML, CSS, and AWS. Prior exposure to working with AWS services such as S3 and Lambda is essential. Your expertise in REST API development will be a valuable asset to the team. Excellent verbal and written communication skills are necessary to effectively interact with both business and technical teams. The ability to collaborate seamlessly in a fast-paced, result-oriented environment is expected from you.

If you are a passionate and talented professional looking to contribute to our growth trajectory, we welcome you to join our family at GSPANN in Bangalore on a full-time basis. Apply now and be a part of our dynamic team as we continue to innovate and excel in the tech industry.

Published On: 20 March 2025
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
punjab
On-site
You are a skilled and versatile NodeJS & Python Engineer sought to join our dynamic team. Your primary responsibility will involve designing, developing, and maintaining robust server-side logic and APIs that underpin our suite of applications. Collaborating closely with front-end developers, cross-functional engineering teams, and product stakeholders, you will ensure the smooth integration of user-facing features with back-end functionality. Your expertise in NodeJS and proficiency in Python are essential for supporting and extending services written in both languages. Your adaptability and experience across various technologies will play a pivotal role in constructing scalable, high-performance, and secure applications for our diverse global user base.

Escalon is a rapidly expanding company that offers vital back-office services, including accounting, HR, and IT, to a multitude of clients worldwide. As a part of the engineering team, you will contribute to developing the tools and platforms that drive success and scalability for Escalon and its clients.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, or 4+ years of enterprise software development experience in the absence of a degree.
- Minimum 4 years of practical experience with NodeJS (JavaScript) within the Serverless framework.
- Professional proficiency in Python (2+ years), particularly for back-end scripting and service development.
- Solid grasp of object-oriented programming principles in both JavaScript and Python.
- Experience with the AWS serverless environment, encompassing Lambda, Fargate, S3, RDS, SQS, SNS, Kinesis, and Parameter Store.
- Understanding of asynchronous programming patterns and challenges.
- Knowledge of front-end technologies like HTML5 and templating systems.
- Proficiency in designing and developing loosely coupled serverless applications and REST APIs.
- Extensive experience with SQL and database schema design.
- Familiarity with service-oriented architecture (SOA) principles and microservices best practices.
- Effective verbal and written communication skills.
- Experience with modern software engineering practices, including version control (Git), CI/CD, unit testing, and agile development.
- Strong analytical, problem-solving, and debugging skills.

Responsibilities:
- Write reusable, testable, and efficient code in both NodeJS and Python.
- Develop and maintain unit tests and automated testing coverage.
- Integrate front-end elements with server-side logic in a serverless architecture.
- Design and implement low-latency, high-availability, and high-performance applications.
- Ensure security, data protection, and adherence to compliance standards.
- Build and consume RESTful APIs and microservices using AWS Lambda and related services.
- Actively participate in code reviews, design discussions, and architecture planning.
- Promote the use of quality open-source libraries, considering licensing and long-term support.
- Leverage and enhance the existing CI/CD DevOps pipeline for code integration and deployment.
Posted 1 month ago