
2993 DynamoDB Jobs - Page 32

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

6 - 12 Lacs

Bengaluru, Karnataka

On-site

Job Overview: We are looking for a Senior Backend Engineer with strong full-stack development capabilities to join our growing engineering team. The ideal candidate will be an expert in Node.js, TypeScript, and AWS cloud services, with practical experience developing and maintaining scalable backend systems and modern web applications. This role involves designing backend APIs, integrating frontend components, and delivering robust, secure, high-performance solutions.

Requirements:
- 4+ years of professional software development experience
- Deep expertise in Node.js, JavaScript, and TypeScript
- Strong experience building RESTful APIs and backend services
- Solid understanding of microservices architecture and cloud-native development
- Proficiency in MongoDB or other NoSQL databases
- Familiarity with SQL databases such as PostgreSQL or MySQL
- Hands-on experience with AWS services including Lambda, API Gateway, DynamoDB, and S3
- Familiarity with CI/CD pipelines, Git version control, and DevOps tools
- Strong problem-solving skills, analytical thinking, and debugging ability
- Excellent communication and collaboration skills
- Experience with React.js or React Native is a plus
- Exposure to Core Java and Java full-stack development
- Familiarity with Agile/Scrum methodologies
- AWS certification is a plus
- Exposure to AI-based development ("vibe coding") is preferred

Soft Skills:
- Problem-solving and logical thinking
- Attention to clean, maintainable code
- Good collaboration and communication skills
- Eagerness to learn and adapt quickly in a fast-paced environment

Benefits:
- Hands-on experience in full-stack development
- Exposure to real-world applications and live deployments
- Competitive salary and benefits package
- Opportunity to be at the forefront of AI-driven application development, making a tangible impact
- Collaborative, supportive, and innovative work environment
- Chance to define and shape the future of AI-first coding
Job Type: Full-time
Pay: ₹600,000.00 - ₹1,200,000.00 per year
Benefits: Health insurance

About the Company: AriveGuru Technology Solutions Pvt. Ltd
Website: https://www.ariveguru.com
Address: 139, 1st Floor, Sarvabhouma Nagara, MSRS Nagara, Next to IIM Bangalore, Bilekahalli, Bengaluru, Karnataka – 560076

Application Question(s):
- Which Node.js frameworks have you worked with (e.g., Express, NestJS, Fastify)?
- Have you built RESTful or GraphQL APIs with Node.js? What libraries or tools did you use?
- Current location?
- Current CTC?
- Expected CTC (ECTC)?
- Total years of experience / relevant years of experience?

Work Location: In person
Expected Start Date: 22/07/2025
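The Lambda + API Gateway + DynamoDB stack this listing asks about follows a standard proxy-integration contract. Purely as an illustration (the role itself targets Node.js/TypeScript, and the field names follow AWS's proxy-integration event/response format, not this employer's code), a handler of that shape looks like:

```python
import json

def lambda_handler(event, context):
    """Minimal API Gateway proxy-style handler: read the HTTP method
    from the event, branch on it, and return the proxy response envelope
    (statusCode / headers / JSON-encoded body)."""
    method = event.get("httpMethod", "GET")
    if method == "GET":
        status, body = 200, {"message": "ok"}
    else:
        status, body = 405, {"error": f"method {method} not allowed"}
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

The same shape applies in Node.js: API Gateway hands the function one request event and expects the `statusCode`/`body` envelope back.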

Posted 3 weeks ago

Apply

5.0 - 8.0 years

18 - 22 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Job Title: AWS Data Engineer
Experience Required: 5+ Years
Interested? Send your resume to aditya.rao@estrel.ai, and kindly include: updated resume, current CTC, expected CTC, notice period/availability (looking only for immediate joiners), and LinkedIn profile.

Job Overview: We are seeking a skilled and experienced Data Engineer with a minimum of 5 years of experience in Python-based data engineering solutions, real-time data processing, and AWS Cloud technologies. The ideal candidate will have hands-on expertise in designing, building, and maintaining scalable data pipelines, implementing best practices, and working within CI/CD environments.

Key Responsibilities:
- Design and implement scalable and robust data pipelines using Python and frameworks like Pytest and PySpark.
- Work extensively with AWS cloud services such as AWS CDK, S3, Lambda, DynamoDB, EventBridge, Kinesis, CloudWatch, AWS Glue, and Lake Formation.
- Implement data governance and data security protocols, including handling of sensitive data and encryption practices.
- Develop microservices and APIs using FastAPI, GraphQL, and Pydantic.
- Design and maintain solutions for real-time streaming and event-driven architecture.
- Follow SDLC best practices, ensuring code quality through TDD (Test-Driven Development) and robust documentation.
- Use GitLab for version control, and manage deployment pipelines with CI/CD.
- Collaborate with cross-functional teams to align data architecture and services with business objectives.

Required Skills:
- Proficiency in Python 3.6+
- Experience with Python frameworks: Pytest, PySpark
- Strong knowledge of AWS tools and services
- Experience with FastAPI, GraphQL, and Pydantic
- Expertise in real-time data processing, eventing, and microservices
- Good understanding of data governance, security, and Lake Formation
- Familiarity with GitLab, CI/CD pipelines, and TDD
- Strong problem-solving and analytical skills
- Excellent communication and team collaboration skills

Preferred Qualifications:
- AWS certification(s) (e.g., AWS Certified Data Analytics – Specialty, Solutions Architect)
- Experience with DataZone, data cataloging, or metadata management tools
- Experience in high-compliance industries (e.g., finance, healthcare) is a plus
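The Pytest + PySpark pairing in this listing usually works by keeping per-record transformation logic in small pure functions: the same function can be unit-tested directly with Pytest and then applied inside a Spark job or Kinesis consumer. A stdlib-only sketch (the field names `id`/`ts`/`amount` are invented for illustration):

```python
from datetime import datetime, timezone

def normalize_event(record: dict) -> dict:
    """Validate and normalize one raw event record so downstream steps
    see a fixed schema; raise on malformed input rather than pass bad
    rows silently down the pipeline."""
    if "id" not in record or "ts" not in record:
        raise ValueError(f"missing required fields in {record!r}")
    return {
        "id": str(record["id"]),
        # store timestamps as UTC ISO-8601 strings
        "ts": datetime.fromtimestamp(record["ts"], tz=timezone.utc).isoformat(),
        "amount": float(record.get("amount", 0.0)),
    }
```

In a real pipeline this function would be mapped over a stream batch or wrapped in a Spark UDF; the point is that the logic is testable without spinning up Spark or AWS.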

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Who We Are Addepar is a global technology and data company that helps investment professionals provide the most informed, precise guidance for their clients. Hundreds of thousands of users have entrusted Addepar to empower smarter investment decisions and better advice over the last decade. With client presence in more than 50 countries, Addepar’s platform aggregates portfolio, market and client data for over $7 trillion in assets. Addepar’s open platform integrates with more than 100 software, data and services partners to deliver a complete solution for a wide range of firms and use cases. Addepar embraces a global flexible workforce model with offices in New York City, Salt Lake City, Chicago, London, Edinburgh, Pune, and Dubai. The Role Did you know? Alternative investing has the potential to generate higher returns compared to traditional investments over the long term. AI and Machine Learning are revolutionizing the way alternative investments are managed and analyzed. Investors are using these technologies to gain insights, see opportunities, and optimize their investment strategies. Addepar is building solutions to support our clients' alternatives investment strategies. The alternatives data management product is a serverless, modular and terraformed stack. We're hiring a Senior Software Engineer to design, implement and deliver modern software solutions that ingest and process ML-extracted data. You will collaborate closely with cross-functional teams including data scientists and product managers to build intuitive solutions that revolutionize how clients experience alternatives operations. You will work closely with operations engineering on document-based workflow automation and peer engineering teams to define the tech stack. You will iterate quickly through cycles of testing a new product offering on Addepar. 
If you've crafted scalable systems, worked with phenomenal teams on hard problems in financial data, or are just interested in solving really hard, critically important technical problems, come join us!

What You'll Do:
- Architect, implement, and maintain engineering solutions to solve complex problems; write well-designed, testable code.
- Lead individual project priorities, achievements, and software releases.
- Collaborate with machine learning engineers to bring ML-extracted data into the backend stack of the application in Python or other languages.
- Collaborate with product managers and client teams on product requirement iterations, design feasibility, and user feedback.
- Document software functionality, system design, and project plans; this includes clean, readable code with comments.
- Learn and promote engineering standard methodologies and principles.

Who You Are:
- Minimum 5+ years of professional software engineering experience
- In-depth knowledge of Java or Python
- Experience with NoSQL databases
- Experience with serverless architecture and IaC (infrastructure as code), preferably Terraform
- Comfortable working with product management on complex features
- Solutions-oriented, with exceptional analytical and problem-solving skills
- Experience with AWS is a must; experience with DynamoDB and OpenSearch/Elasticsearch
- Familiarity with writing, debugging, and optimizing SQL queries
- Knowledge of front-end development is a plus but not required
- Experience in finance or wealth tech is a plus

Important note: this role requires working from our Pune office 3 days a week (hybrid work model).

Our Values:
Act Like an Owner - Think and operate with intention, purpose and care. Own outcomes.
Build Together - Collaborate to unlock the best solutions. Deliver lasting value.
Champion Our Clients - Exceed client expectations. Our clients' success is our success.
Drive Innovation - Be bold and unconstrained in problem solving. Transform the industry.
Embrace Learning - Engage our community to broaden our perspective. Bring a growth mindset. In addition to our core values, Addepar is proud to be an equal opportunity employer. We seek to bring together diverse ideas, experiences, skill sets, perspectives, backgrounds and identities to drive innovative solutions. We commit to promoting a welcoming environment where inclusion and belonging are held as a shared responsibility. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. PHISHING SCAM WARNING: Addepar is among several companies recently made aware of a phishing scam involving con artists posing as hiring managers recruiting via email, text and social media. The imposters are creating misleading email accounts, conducting remote “interviews,” and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from Addepar without a formal interview process. Additionally, Addepar will not ask you to purchase equipment or supplies as part of your onboarding process. If you have any questions, please reach out to TAinfo@addepar.com.
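When a serverless stack leans on DynamoDB, as this listing's requirements do, a common design is composing partition/sort keys so that related items share a partition and can be fetched with one Query. A sketch of that pattern (the entity names and key shapes are invented for illustration, not Addepar's schema):

```python
def portfolio_keys(firm_id: str, portfolio_id: str, doc_id: str) -> dict:
    """Compose DynamoDB-style keys: all documents for one portfolio share
    a partition key and sort lexicographically by document id."""
    return {
        "pk": f"FIRM#{firm_id}#PORTFOLIO#{portfolio_id}",
        "sk": f"DOC#{doc_id}",
    }

def docs_for_portfolio(items: list, firm_id: str, portfolio_id: str) -> list:
    """Mimic a Query with KeyConditionExpression
    pk = :pk AND begins_with(sk, 'DOC#') over an in-memory item list."""
    pk = f"FIRM#{firm_id}#PORTFOLIO#{portfolio_id}"
    return [i for i in items if i["pk"] == pk and i["sk"].startswith("DOC#")]
```

The in-memory filter stands in for the actual `Query` call; the key-composition functions are the part that would be shared between writers and readers.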

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description

About Amazon.com: Amazon.com strives to be Earth's most customer-centric company, where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.

About the Team: The RBS team is an integral part of Amazon's online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection and good product information. The team's primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and online user experience.

Overview of the Role: The candidate will be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You will be detail-oriented and organized, capable of handling multiple projects at once, and capable of dealing with ambiguity and rapidly changing priorities. You will have expertise in process optimization and systems thinking, and will be required to engage directly with multiple internal teams to drive business projects/automation for the RBS team. Candidates must be successful both as individual contributors and in a team environment, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible, detail-oriented, and comfortable working in a deadline-driven work environment.
Responsibilities Include:
- Work across team(s) and the Ops organization at country, regional and/or cross-regional level to drive improvements and implement solutions for customers: cost savings in process workflows, systems configuration, and performance metrics.

Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proficiency in automation using Python
- Excellent oral and written communication skills
- Experience with SQL, ETL processes, or data transformation

Preferred Qualifications:
- Experience with scripting and automation tools
- Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK
- Knowledge of AWS services such as SQS, SNS, CloudWatch and DynamoDB
- Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions
- Understanding of cloud services, serverless architecture, and systems integration

Key Job Responsibilities: As a Business Intelligence Engineer on the team, you will collaborate closely with business partners to architect, design, and implement BI projects and automations.

Responsibilities:
- Design, develop, and operate scalable, performant data warehouse (Redshift) tables, data pipelines, reports, and dashboards.
- Develop moderately to highly complex data processing jobs using appropriate technologies (e.g. SQL, Python, Spark, AWS Lambda, etc.).
- Develop dashboards and reports.
- Collaborate with stakeholders to understand business domains, requirements, and expectations; additionally, work with owners of data source systems to understand capabilities and limitations.
- Deliver minimally to moderately complex data analysis, collaborating as needed with Data Science as complexity increases.
- Actively manage the timeline and deliverables of projects; anticipate risks and resolve issues.
- Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
Internal Job Description: Retail Business Service, ARTS is a growing team that supports the Retail Efficiency and Paid Services business and tech teams. There is ample growth opportunity in this role for someone who exhibits Ownership and Insists on the Highest Standards, and has strong experience with engineering and operational best practices.

Basic Qualifications:
- 5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science, or a related field
- Experience with data modeling, SQL, ETL, data warehousing, and data lakes
- Strong experience with engineering and operations best practices (version control, data quality/testing, monitoring, etc.)
- Expert-level SQL
- Proficiency with one or more general-purpose programming languages (e.g. Python, Java, Scala, etc.)
- Knowledge of AWS products such as Redshift, QuickSight, and Lambda
- Excellent verbal/written communication and data presentation skills, including the ability to succinctly summarize key findings and effectively communicate with both business and technical teams

Preferred Qualifications:
- Experience with data-specific programming languages/packages such as R or Python Pandas
- Experience with AWS solutions such as EC2, DynamoDB, S3, and EMR
- Knowledge of machine learning techniques and concepts

Basic Qualifications:
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with statistical analysis packages such as R, SAS, and MATLAB
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

Preferred Qualifications:
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI MAA 15 SEZ
Job ID: A2994013
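The data processing jobs this role describes are largely SQL rollups over warehouse tables. As a tiny, self-contained illustration of the pattern (Python's built-in sqlite3 stands in for Redshift here; the table and column names are invented):

```python
import sqlite3

# In-memory stand-in for a warehouse fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("APAC", 10.0), ("APAC", 5.0), ("EU", 7.5)],
)

# The kind of GROUP BY rollup a reporting job would feed into a dashboard.
rows = conn.execute(
    "SELECT region, COUNT(*) AS n, SUM(amount) AS total "
    "FROM orders GROUP BY region ORDER BY region"
).fetchall()
# rows -> [('APAC', 2, 15.0), ('EU', 1, 7.5)]
```

On Redshift the same query runs against columnar tables loaded by ETL; the SQL itself is the portable part.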

Posted 3 weeks ago

Apply

10.0 years

3 - 9 Lacs

Hyderābād

On-site

Job Title: AWS Lambda Developer
Experience: 10+ years

Key Responsibilities:
- Design and develop serverless applications using AWS Lambda and Node.js.
- Integrate AWS services such as DynamoDB, API Gateway, and S3 to build scalable, high-performance solutions.
- Implement microservices architecture to ensure modular and maintainable code.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Optimize performance and scalability of serverless functions.
- Document the codebase and maintain best practices in coding standards.
- Ensure continuous integration and deployment (CI/CD) processes are followed.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 10+ years of experience in AWS Lambda development and serverless architecture.
- Proficient in Node.js and DynamoDB.
- Strong understanding of microservices design principles.
- Experience with API integration and cloud-based services.
- Familiarity with CI/CD pipelines and version control systems.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills and the ability to work collaboratively in a team environment.

Preferred Skills:
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Developer)
- Experience with other AWS services (e.g., S3, API Gateway, CloudWatch)
- Experience with monitoring and logging tools (e.g., Datadog)
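"Optimize performance and scalability of serverless functions" in practice often means handling DynamoDB throttling gracefully: retry throttled calls with capped exponential backoff and jitter instead of failing the request. A language-agnostic sketch of that retry wrapper (shown in Python; the role itself calls for Node.js, and the parameters are illustrative defaults):

```python
import random
import time

def with_backoff(op, max_attempts=5, base=0.05, is_retryable=lambda e: True):
    """Call op(); on a retryable failure (e.g. a DynamoDB throughput-
    exceeded error), sleep with capped exponential backoff plus jitter
    and try again, re-raising after max_attempts."""
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception as exc:
            if attempt == max_attempts - 1 or not is_retryable(exc):
                raise
            # cap the delay at 1s; full jitter spreads retries out
            time.sleep(min(1.0, base * (2 ** attempt)) * random.random())
```

AWS SDKs ship built-in retry policies that do the same thing; a hand-rolled wrapper like this is mainly useful for calls the SDK does not cover.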

Posted 3 weeks ago

Apply

7.0 years

2 - 5 Lacs

Hyderābād

On-site

Principal DevOps Engineer – NOC (Nasuni Orchestration Center)

About Nasuni: Nasuni is a profitable, growing SaaS data infrastructure company reinventing enterprise file storage and data management in an AI-driven world. We power the data infrastructure of the world's most innovative enterprises. Backed by Vista Equity Partners, our engineers aren't working behind the scenes; they're building what's next with AI. Our platform lets businesses seamlessly store, access, protect, and unlock AI-driven insights from exploding volumes of unstructured file data. As an engineer here, you'll help build AI-powered infrastructure trusted by 900+ global customers, including Dow, Mattel, and Autodesk. Nasuni is headquartered in Boston, USA, with offices in Cork, Ireland and London, UK, and we are starting an India Innovation Center in Hyderabad, India to leverage the abundant IT talent available there. The company's recent annual revenue is $160M, growing at 25% CAGR. Nasuni is reinventing enterprise file storage with patented innovation. We have a hybrid work culture: 3 days a week working from the Hyderabad office during core working hours and 2 days working from home.

Job Description: We are excited to be growing the team that builds and maintains the Nasuni Orchestration Center and the SaaS portions of our product portfolio. This team provides the key services and supporting infrastructure in a modern cloud environment. Critical skills include familiarity with AWS, Linux systems, and configuration management, as well as the common DevOps activities, practices, and techniques found in highly automated environments. Candidates for this position will have supported high-scale REST API services as well as customer- and internal-facing web applications. They understand the importance of quality and responsiveness in meeting customer expectations.
Success in this position requires you to be a self-motivated team player and an open-minded individual contributor who can help the team reach its larger goals.

Responsibilities:
- Support, maintain, and enhance cloud infrastructure through Terraform, CloudFormation, Puppet, and Python.
- Contribute to maturing the DevOps and SRE practices within Nasuni by utilizing methodologies such as CI/CD, Agile, and Acceptance Test Driven Development.
- Take or share ownership of one or more large areas of the Nasuni Orchestration Center hosted at AWS.
- Practice root cause analysis to determine the scope and scale of issue impact; create epics/stories and construct automation to prevent problem recurrence.
- Develop repeatable tools and processes for automation, configuration, monitoring, and alerting.
- Participate in requirements analysis, design, design reviews, and other work related to expanding Nasuni's functionality.
- Participate in a 24/7 on-call rotation for production systems.
- Work with AWS technologies such as EC2, ECS, Fargate, Aurora, ElastiCache, DynamoDB, API Gateway, and Lambda.
- Collaborate with engineering management, product management, and key stakeholders to understand requirements and translate them into technical specifications.
- Be recognized as an expert in one or more technical areas.
- Respond to critical customer-raised incidents in a timely manner; perform root cause analysis and implement preventative measures to avoid future incidents.
- Provide technical leadership to more junior engineers; mentor and provide guidance on best practices and career development.
- Drive all team members to implement the industry's best practices for securing internet-facing applications.
- Lead efforts to continuously improve development processes, tools, and methodologies.

Technical Skills Required:
- 7+ years of production experience in an SLA-driven SaaS environment.
- Significant experience with configuration management (Terraform/CloudFormation), Agile Scrum, and CI/CD.
- Experience building, measuring, tuning, supporting, and reporting on high-traffic web services.
- Demonstrated ability to build AMIs with tools like Packer and containers with tools like Docker Compose.
- Comfort following versioning and release strategies with git/GitHub Actions and AWS CodeBuild/CodePipeline/CodeDeploy.
- Experience debugging AWS or other cloud infrastructure.
- Competence with AWS API libraries (boto3), bash, the AWS CLI, and scripting with Python.
- Experience with infrastructure configuration management and automation tools (such as Puppet, Packer, CloudFormation, Terraform) as well as use of containers in CI/CD pipelines and production environments.
- Knowledge of the principles found in the Google SRE book and how to apply them.
- College experience in a related discipline (advanced degrees welcome).

Bonus points for:
- Activity with open-source communities
- Familiarity with SQL and NoSQL databases
- AWS/Azure or other major cloud vendor certification
- Excellent problem-solving and troubleshooting skills
- Experience working in an agile development environment, and a solid understanding of agile methodologies
- Strong communication and leadership skills, with the ability to mentor and inspire colleagues
- Demonstrable experience testing and asserting the quality of the work you produce through writing unit, integration, and smoke tests

Experience: BE/B.Tech or ME/M.Tech in Computer Science or Electronics and Communications, or MCA; 12 to 15 years of previous experience in the industry.

Why Work at Nasuni – Hyderabad?
As part of our commitment to your well-being and growth, Nasuni offers competitive benefits designed to support every stage of your life and career:
- Competitive compensation programs
- Flexible time off and leave policies
- Comprehensive health and wellness coverage
- Hybrid and flexible work arrangements
- Employee referral and recognition programs
- Professional development and learning support
- Inclusive, collaborative team culture
- Modern office spaces with team events and perks
- Retirement and statutory benefits as per Indian regulations

To all recruitment agencies: Nasuni does not accept agency resumes. Please do not forward resumes to our job boards, Nasuni employees, or any other company location. Nasuni is not responsible for any fees related to unsolicited resumes.

Nasuni is proud to be an equal opportunity employer. We are committed to fostering a diverse, inclusive, and respectful workplace where every team member can thrive. All qualified applicants will receive consideration for employment without regard to race, religion, caste, color, sex, gender identity or expression, sexual orientation, disability, age, national origin, or any other status protected by applicable laws in India or the country of employment. We celebrate individuality and are committed to building a workplace that reflects the diversity of the communities we serve. If you require accommodation during the recruitment process, please let us know.

This privacy notice relates to information collected (whether online or offline) by Nasuni Corporation and our corporate affiliates (collectively, "Nasuni") from or about you in your capacity as a Nasuni employee, independent contractor/service provider or as an applicant for an employment or contractor relationship with Nasuni.
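The Google SRE practices this listing names center on SLOs and error budgets: an SLO of 99.9% over 100,000 requests allows 100 failed requests, and alerting keys off how much of that budget remains. A minimal sketch of the budget arithmetic (the function name and signature are illustrative):

```python
def error_budget_remaining(slo: float, total: int, errors: int) -> float:
    """Fraction of the SLO error budget still unspent.
    E.g. slo=0.999 over 100,000 requests allows 100 errors;
    after 40 errors, 60% of the budget remains."""
    allowed = (1.0 - slo) * total
    if allowed == 0:
        # a 100% SLO has no budget: any error exhausts it
        return 0.0 if errors else 1.0
    return max(0.0, 1.0 - errors / allowed)
```

Teams typically page when the remaining budget (or its burn rate) crosses a threshold, rather than on individual errors.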

Posted 3 weeks ago

Apply

5.0 years

2 - 5 Lacs

Hyderābād

On-site

Senior DevOps Engineer – NOC (Nasuni Orchestration Center)

About Nasuni: Nasuni is a profitable, growing SaaS data infrastructure company reinventing enterprise file storage and data management in an AI-driven world. We power the data infrastructure of the world's most innovative enterprises. Backed by Vista Equity Partners, our engineers aren't working behind the scenes; they're building what's next with AI. Our platform lets businesses seamlessly store, access, protect, and unlock AI-driven insights from exploding volumes of unstructured file data. As an engineer here, you'll help build AI-powered infrastructure trusted by 900+ global customers, including Dow, Mattel, and Autodesk. Nasuni is headquartered in Boston, USA, with offices in Cork, Ireland and London, UK, and we are starting an India Innovation Center in Hyderabad, India to leverage the abundant IT talent available there. The company's recent annual revenue is $160M, growing at 25% CAGR. We have a hybrid work culture: 3 days a week working from the Hyderabad office during core working hours and 2 days working from home.

Job Description: We are excited to be growing the team that builds and maintains the Nasuni Orchestration Center and the SaaS portions of our product portfolio. This team provides the key services and supporting infrastructure in a modern cloud environment. Critical skills include familiarity with AWS, Linux systems, and configuration management, as well as the common DevOps activities, practices, and techniques found in highly automated environments. Candidates for this position will have supported high-scale REST API services as well as customer- and internal-facing web applications. They understand the importance of quality and responsiveness in meeting customer expectations. Success in this position requires you to be a self-motivated team player and an open-minded individual contributor who can help the team reach its larger goals.
Responsibilities:
- Support, maintain, and enhance cloud infrastructure through Terraform, CloudFormation, Puppet, and Python.
- Contribute to maturing the DevOps and SRE practices within Nasuni by utilizing methodologies such as CI/CD, Agile, and Acceptance Test Driven Development.
- Take or share ownership of one or more large areas of the Nasuni Orchestration Center hosted at AWS.
- Practice root cause analysis to determine the scope and scale of issue impact; create epics/stories and construct automation to prevent problem recurrence.
- Develop repeatable tools and processes for automation, configuration, monitoring, and alerting.
- Participate in requirements analysis, design, design reviews, and other work related to expanding Nasuni's functionality.
- Participate in a 24/7 on-call rotation for production systems.
- Work with AWS technologies such as EC2, ECS, Fargate, Aurora, ElastiCache, DynamoDB, API Gateway, and Lambda.
- Be recognized as an expert in one or more technical areas.
- Respond to critical customer-raised incidents in a timely manner; perform root cause analysis and implement preventative measures to avoid future incidents.
- Work with the team to implement the industry's best practices for securing internet-facing applications.
- Continuously improve development processes, tools, and methodologies.

Technical Skills Required:
- 5+ years of production experience in an SLA-driven SaaS environment.
- Significant experience with configuration management (Terraform/CloudFormation), Agile Scrum, and CI/CD.
- Experience building, measuring, tuning, supporting, and reporting on high-traffic web services.
- Demonstrated ability to build AMIs with tools like Packer and containers with tools like Docker Compose.
- Comfort following versioning and release strategies with git/GitHub Actions and AWS CodeBuild/CodePipeline/CodeDeploy.
- Experience debugging AWS or other cloud infrastructure.
- Competence with AWS API libraries (boto3), bash, the AWS CLI, and scripting with Python.
- Experience with infrastructure configuration management and automation tools (such as Puppet, Packer, CloudFormation, Terraform) as well as use of containers in CI/CD pipelines and production environments.
- Knowledge of the principles found in the Google SRE book and how to apply them.
- College experience in a related discipline (advanced degrees welcome).

Bonus points for:
- Activity with open-source communities
- Familiarity with SQL and NoSQL databases
- AWS/Azure or other major cloud vendor certification
- Excellent problem-solving and troubleshooting skills
- Experience working in an agile development environment, and a solid understanding of agile methodologies
- Strong communication skills
- Demonstrable experience testing and asserting the quality of the work you produce through writing unit, integration, and smoke tests

Experience: BE/B.Tech or ME/M.Tech in Computer Science or Electronics and Communications, or MCA; 7 to 10 years of previous experience in the industry.

Why Work at Nasuni – Hyderabad?
As part of our commitment to your well-being and growth, Nasuni offers competitive benefits designed to support every stage of your life and career:
- Competitive compensation programs
- Flexible time off and leave policies
- Comprehensive health and wellness coverage
- Hybrid and flexible work arrangements
- Employee referral and recognition programs
- Professional development and learning support
- Inclusive, collaborative team culture
- Modern office spaces with team events and perks
- Retirement and statutory benefits as per Indian regulations

To all recruitment agencies: Nasuni does not accept agency resumes. Please do not forward resumes to our job boards, Nasuni employees, or any other company location. Nasuni is not responsible for any fees related to unsolicited resumes.

Nasuni is proud to be an equal opportunity employer. We are committed to fostering a diverse, inclusive, and respectful workplace where every team member can thrive.
All qualified applicants will receive consideration for employment without regard to race, religion, caste, color, sex, gender identity or expression, sexual orientation, disability, age, national origin, or any other status protected by applicable laws in India or the country of employment. We celebrate individuality and are committed to building a workplace that reflects the diversity of the communities we serve. If you require accommodation during the recruitment process, please let us know. This privacy notice relates to information collected (whether online or offline) by Nasuni Corporation and our corporate affiliates (collectively, "Nasuni") from or about you in your capacity as a Nasuni employee, independent contractor/service provider or as an applicant for an employment or contractor relationship with Nasuni.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

2 - 8 Lacs

Hyderabad

On-site

Hyderabad, Telangana Job ID 30176626 Job Category Digital Technology Job Title – Lead Engineer Preferred Location: Hyderabad, India Full time/Part Time - Full Time Build a career with confidence Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do. Role Summary: Lead Engineer (Full Stack) is a crucial role in the product development team at Carrier. This role focuses on the design and development of backend and frontend modules, following Carrier software development standards. Role Responsibilities: Design and develop AWS IoT/cloud-based applications using TypeScript, Node.js, and ReactJS Work closely with onsite, offshore, and cross-functional teams, Product Management, frontend developers, and SQA teams to use technologies effectively and deliver high quality on time Work closely with solutions architects on low-level design. Effectively plan and delegate the sprint work to the development team while also contributing individually. Proactively identify risks and failure modes early in the development lifecycle and develop POCs to mitigate the risks early in the program This individual is self-directed, highly motivated, and organized, with strong analytical thinking and problem-solving skills, and an ability to work on multiple projects and function in a team environment. Should be able to help and direct junior developers in the right direction when needed Participate in peer code reviews to ensure that developers follow the highest standards in implementing the product. Participate in PI planning and identify any technology-side challenges in implementing a specific Epic/Story. 
Keep an eye on NFRs and ensure our product meets all required compliances per Carrier standards. Minimum Requirements: 6-10 years of overall experience in the software domain At least 4 years of experience with cloud-native applications in AWS Solid working knowledge of TypeScript, NodeJS, ReactJS Experience in executing CI/CD processes Experience in developing APIs [REST, GraphQL, Websockets]. Knowledge of AWS IoT Core and in-depth knowledge of AWS cloud-native services including Kinesis, DynamoDB, Lambda, API Gateway, Timestream, SQS, SNS, CloudWatch Solid understanding of creating AWS infra using the Serverless Framework/CDK. Experience in implementing alerts and monitoring to support smooth operations. Solid understanding of the Jest framework (unit testing) and integration tests. Experience in cloud cost optimization and securing AWS services. Benefits We are committed to offering competitive benefits programs for all of our employees, and enhancing our programs when necessary. Have peace of mind and body with our health insurance Make yourself a priority with flexible schedules and leave policy Drive forward your career through professional development opportunities Achieve your personal goals with our Employee Assistance Program. Our commitment to you Our greatest assets are the expertise, creativity and passion of our employees. We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Now! 
Carrier is An Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.

Posted 3 weeks ago

Apply

0 years

3 - 5 Lacs

Chennai

On-site

Role: Database Administrator Location: Chennai Full-time Opportunity Primary Skills: PostgreSQL, MySQL JD: Database Administration: Manage, monitor, optimize, and support PostgreSQL, MySQL, SQL Server and Redis (ElastiCache) in production and development environments. On-Call Support: Provide 16/7 on-call support for database incidents, ensuring high availability and reliability. AWS Knowledge: Deploy and manage AWS database services like RDS, Aurora, DynamoDB, ElastiCache, and Redshift. Infrastructure as Code (IaC): Automate database provisioning and management using Terraform. Performance Optimization: Tune queries, indexing, and partitioning, and troubleshoot slow queries and replication issues. Database Releases & Schema Changes: Implement Flyway, Liquibase, or Alembic for zero-downtime database migrations. Multi-Database Support: Learn and provide support for SQL and NoSQL databases beyond PostgreSQL/MySQL. Automation: Use Prometheus and CloudWatch for database health checks and automation. Collaboration & Documentation: Work with developers, DevOps, and SRE teams to support database needs and document best practices. Thanks & Regards Ramdas Sakthivel | Sr. Technical Recruiter Arthur Grand Technologies Inc ramdas@arthurgrand.com Job Type: Full-time Schedule: Day shift Work Location: In person
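The performance-optimization duties above (tuning queries and indexing, troubleshooting slow queries) usually come down to reading a query plan before and after adding an index. A minimal, hypothetical sketch using Python's built-in sqlite3 as a stand-in for the PostgreSQL/MySQL `EXPLAIN` tooling this role would use; the table and index names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

def plan(sql):
    """Return the query plan as one string (the 'detail' column of each step)."""
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"
before = plan(query)  # without an index, the plan reports a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)   # with the index, the plan switches to an index search
print(before)
print(after)
```

The same workflow applies to the production engines named in the posting, where `EXPLAIN (ANALYZE)` additionally reports actual row counts and timings.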

Posted 3 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: DotNet with AWS Developer Location: Noida, Pune, Mumbai, Bengaluru, Hyderabad & Chennai Notice Period: Immediate Exp: 6 to 12 Yrs Summary We are seeking a skilled AWS + .Net Developer to join our dynamic team. The ideal candidate will have strong experience in .NET Core, C#, and AWS services, with a proven track record of developing and integrating applications using CI/CD pipelines. This role involves full lifecycle development, from requirements analysis to deployment and maintenance. Responsibilities Develop and integrate requirements using CI/CD code pipeline with GitHub. Participate in full development lifecycle including requirements analysis and design. Serve as technical expert on development projects. Write technical specifications based on conceptual design and business requirements. Support, maintain, and document software functionality. Identify and evaluate new technologies for implementation. Analyze code to find causes of errors and revise programs as needed. Participate in software design meetings and analyze user needs to determine technical requirements. Consult with end users to prototype, refine, test, and debug programs. Conduct complex and vital work critical to the organization. Work independently with complete latitude for judgment. Mentor less experienced peers and display leadership as needed. 
Required Skills .NET Core, C#, AWS SDK Experience with NoSQL databases like MongoDB and AWS DynamoDB Proficient in working with JIRA Use of Microsoft .NET Framework and supported programming languages Strong understanding of AWS services including EC2, ECS, Lambda, SNS, SQS, EventBridge, DynamoDB, CloudWatch Backend development using C# and .Net Core (API, WS) Version control using Git with Copilot Preferred Skills UI development experience with Angular 8+ Working experience with Confluence, Lucid portal, and ServiceNow Tools & Technologies GitHub Desktop Visual Studio Code Visual Studio IDE (Professional 2022 with GitHub Copilot) Teams and Outlook for communication Soft Skills Strong communication skills in English Ability to complete tasks within estimated time by the team

Posted 3 weeks ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio. Your Role And Responsibilities As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys. Your primary responsibilities include: Comprehensive Feature Development and Issue Resolution: Working on end-to-end feature development and solving challenges faced in the implementation. Stakeholder Collaboration and Issue Resolution: Collaborate with key stakeholders, internal and external, to understand problems and issues with the product and features, and solve them as per the SLAs defined. Continuous Learning and Technology Integration: Being eager to learn new technologies and implementing them in feature development. Preferred Education Master's Degree Required Technical And Professional Expertise Experience with technologies like Spring Boot and Java Demonstrated technical leadership experience on impactful customer-facing projects. Experience in building web applications in the Java/J2EE stack, and experience in UI frameworks such as React JS Working knowledge of any messaging system (Kafka preferred) Experience designing and integrating REST APIs using Spring Boot. Preferred Technical And Professional Experience Strong experience in concurrent design and multi-threading - General Experience, Object Oriented Programming System (OOPS), SQL Server/Oracle/MySQL, working knowledge of Azure or AWS cloud. 
Preferred Experience in building applications in a container-based environment (Docker/Kubernetes) on AWS Cloud. Basic knowledge of SQL or NoSQL databases (Postgres, MongoDB, DynamoDB preferred) design and queries.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Chandigarh, India

On-site

Organization: Escalon Business Services Pvt. Ltd Location: Mohali - (On-site) Shift: Day Company’s Link: http://escalon.services/ Instagram Handle: https://instagram.com/escalon.india LinkedIn Handle: https://www.linkedin.com/company About the Role: We are looking for a skilled and motivated Full Stack Developer with a strong background in Node.js, Python, JavaScript, SQL, and AWS. The ideal candidate will have experience in designing and building scalable, maintainable, and high-performance applications, with a strong grasp of OOP concepts, serverless architecture, and microservices. Key Responsibilities: Design, develop, and maintain full-stack applications using Node.js, Python, and JavaScript. Develop RESTful APIs and integrate with a microservices-based architecture. Work with SQL databases (e.g., PostgreSQL, MySQL) to design and optimize data models. Implement solutions on the AWS cloud, leveraging services such as Lambda, API Gateway, DynamoDB, RDS, and more. Architect and build serverless and cloud-native applications Follow object-oriented programming (OOP) principles and design patterns for clean, maintainable code. Collaborate with cross-functional teams including Product, DevOps, and QA. Participate in code reviews, testing, and deployment processes. Ensure security, performance, and scalability of applications. Requirements: 3+ years of experience in full-stack development. Strong proficiency in Node.js, Python, and JavaScript. Experience with SQL databases and writing complex queries. Solid understanding of AWS services, especially in serverless architecture. Deep knowledge of OOP principles and software design patterns Familiarity with microservices architecture and distributed systems. Experience with CI/CD pipelines and version control (Git). Strong problem-solving and debugging skills.
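Several responsibilities above center on AWS Lambda behind API Gateway. As an illustration of the shape of such a service (not the employer's actual code), here is a minimal Python Lambda-style handler returning an API Gateway proxy-integration response; the query parameter and message are made up, and only the event fields used here are assumed:

```python
import json

def handler(event, context):
    """Minimal Lambda handler for an API Gateway proxy integration.

    `event` follows the API Gateway proxy format (statusCode/headers/body
    response shape); `context` is unused in this sketch.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")  # hypothetical query parameter
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake event, as one might do in a unit test:
response = handler({"queryStringParameters": {"name": "dev"}}, None)
print(response["body"])
```

Because the handler is a plain function taking a dict, it can be unit-tested without any AWS infrastructure, which is one reason this structure is idiomatic for serverless services.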

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role - AWS Cloud Engineer Experience - 4 to 8 Yrs Location - Chennai, Bangalore, Hyderabad, Mumbai, Indore JD – AWS Cloud Engineer Cloud Infrastructure: AWS services: EC2, S3, VPC, IAM, Lambda, RDS, Route 53, ELB, CloudFront, Auto Scaling Serverless architecture design using Lambda, API Gateway, and DynamoDB Containerization: Docker and orchestration with ECS or EKS (Kubernetes) Infrastructure as Code (IaC): Terraform (preferred), AWS CloudFormation Hands-on experience creating reusable modules and managing cloud resources via code Automation & CI/CD: Jenkins, GitHub Actions, GitLab CI/CD, AWS CodePipeline Automating deployments and configuration management Scripting & Programming: Proficiency in Python, Bash, or PowerShell for automation and tooling Monitoring & Logging: CloudWatch, CloudTrail, Prometheus, Grafana, ELK stack Networking: VPC design, Subnets, NAT Gateway, VPN, Direct Connect, Load Balancing; Security Groups, NACLs, and route tables Security & Compliance: IAM policies and roles, KMS, Secrets Manager, Config, GuardDuty; Implementing encryption, access controls, and least privilege policies

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role - AWS Cloud Engineer Experience - 4 to 8 Yrs Location - Chennai, Bangalore, Hyderabad, Mumbai, Indore JD – AWS Cloud Engineer Cloud Infrastructure: AWS services: EC2, S3, VPC, IAM, Lambda, RDS, Route 53, ELB, CloudFront, Auto Scaling Serverless architecture design using Lambda, API Gateway, and DynamoDB Containerization: Docker and orchestration with ECS or EKS (Kubernetes) Infrastructure as Code (IaC): Terraform (preferred), AWS CloudFormation Hands-on experience creating reusable modules and managing cloud resources via code Automation & CI/CD: Jenkins, GitHub Actions, GitLab CI/CD, AWS CodePipeline Automating deployments and configuration management Scripting & Programming: Proficiency in Python, Bash, or PowerShell for automation and tooling Monitoring & Logging: CloudWatch, CloudTrail, Prometheus, Grafana, ELK stack Networking: VPC design, Subnets, NAT Gateway, VPN, Direct Connect, Load Balancing; Security Groups, NACLs, and route tables Security & Compliance: IAM policies and roles, KMS, Secrets Manager, Config, GuardDuty; Implementing encryption, access controls, and least privilege policies

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the Team: The backend engineering team develops the digital backbone of the bank that drives all user experiences. They create robust, scalable, and secure backend systems. They are responsible for designing, developing, deploying, and monitoring all backend services in production. The team develops and maintains a strategic technology roadmap and ensures the application team implements best practices to ensure optimal performance, scalability, and availability of our systems. The team works very closely with all the application development teams to ensure that all solutions are aligned with the business and technical requirements, and to enhance the overall quality and performance of the systems. Get to know the Role: We are seeking a talented & passionate Backend Engineer to join our team. You will have opportunities to work on multiple backend service clusters as well as participate in machine learning pipelines. It is very important that our team members take the initiative to identify problems, and have the right mindset and skill sets to solve them. 
The day-to-day activities/Responsibilities: Design and write with the cutting-edge Go language to improve the availability, scalability, latency, and efficiency of Digibank’s range of services Work with the engineering team to explore and create new designs/architectures geared towards scale and performance Participate in code and design reviews to maintain our high development standards Engage in service capacity and demand planning, software performance analysis, tuning and optimization Collaborate with product and experience teams to define and prototype feature specifications Work closely with the infrastructure team in building and scaling back-end services as well as performing root cause analysis investigations Design, build, analyze and fix large-scale systems Learn full stack performance tuning and optimization Debug and modify complex, production software The must-haves/Qualifications: A degree in Computer Science, Software Engineering, Information Technology or related fields with strong Computer Science fundamentals in algorithms and data structures 1-4 years of experience in software engineering in a distributed systems environment Possess excellent communication, sharp analytical abilities with proven design skills, able to think critically of the current system in terms of growth and stability You can be a good coder in any language (C++, C, Java, Scala, Rust, Haskell, OCaml, Erlang, Python, Ruby, PHP, Node.JS, C# etc.), but willing to work on Golang Our Tech Stack: Our core services tech stack consists of Golang with Redis, MySQL, DynamoDB, Elasticsearch data stores as well as HAProxy load balancers. They all run on the AWS cloud infrastructure with auto-scaling abilities. Our mobile app platform coverage includes native iOS and Android, written in Swift and RxJava. Our Command Center front-end is built on Rails, HTML5, CSS and Javascript. 
We use GitHub for our code repository and we adhere to the basic Continuous Delivery tenets utilising a host of tools to support our release pipeline and code quality. These include Travis CI, New Relic, PullReview, Code Climate, Papertrail, Gemnasium, JFrog and Jenkins.
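The reliability work this role describes (scaling back-end services, root-cause analysis, capacity planning) commonly relies on retry-with-exponential-backoff when calling downstream stores like the Redis/MySQL/DynamoDB stack above. The services themselves are written in Go; this is a language-neutral sketch in Python, with an invented flaky dependency and an injectable sleep so the example runs instantly:

```python
import random
import time

def retry_with_backoff(op, max_attempts=5, base_delay=0.01, sleep=time.sleep):
    """Call `op` until it succeeds, backing off exponentially with full jitter.

    The delays and attempt count here are illustrative, not tuned values.
    `sleep` is injectable so tests can avoid real waiting.
    """
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Full jitter: a random delay in [0, base * 2^attempt]
            sleep(random.uniform(0, base_delay * (2 ** attempt)))

# A hypothetical downstream call that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = retry_with_backoff(flaky, sleep=lambda _: None)
```

Jitter matters at scale: without it, many clients retrying in lockstep can re-overload a recovering service (the "thundering herd" problem).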

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Note: We are not considering Azure-based experience for this role. Strong AWS exposure is mandatory. Job Description We are looking for a highly skilled Data Engineer with strong experience in AWS and modern data stack technologies to join our growing team. Must-Have Skills AWS (extensive hands-on experience) Snowflake (data warehousing) DBT (Data Build Tool) Strong SQL skills Good-to-Have (Preferred) Skills Python Fivetran NoSQL databases – DynamoDB or MongoDB Responsibilities Design, build, and maintain scalable and efficient data pipelines on AWS. Implement and manage data transformation workflows using DBT and Snowflake. Work closely with data analysts, product managers, and engineering teams to deliver clean, reliable, and structured data. Ensure data quality, security, and performance optimization across platforms. Integrate third-party data tools such as Fivetran when needed. Contribute to the design and architecture of modern data infrastructure. Skills: AWS, NoSQL, NoSQL databases, data engineer, SQL, DBT, Snowflake, Fivetran, Python, DynamoDB

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Us We’re on a mission to make health and wellness fairer and better for everyone by putting power and choice back into the hands of the people that make the difference - health and wellness practitioners. UNTIL is the home of growth - for the best health and wellness professionals and for exceptional talent. We’re passionate about helping people progress in their careers, providing them with opportunities, support, and exposure to the right experiences. For practitioners, we’re more than real estate - we’re access to the tools, community, and support they need to grow, scale, and thrive in their business. With three central London clubs (Soho, Liverpool Street, and Marylebone), and ambitious expansion plans in 2026, we’re redefining the wellness category by building a community where practitioners and team members can unlock their potential, collaborate, and grow. About the Role As a Senior Full Stack Engineer, you will work closely with the Technical Lead and product team to design, build, and scale key features for our platform. You will be involved in both frontend and backend development, with a strong focus on building scalable, maintainable, and high-performing systems. You’ll help implement new features, integrate third-party services, and ensure that our platform can support global expansion across multiple regions and languages. You will also be provided with AI-powered tools that accelerate workflows, such as GitHub Copilot, Cursor, and ChatGPT. Key Responsibilities: Full Stack Development: Develop and maintain both backend services and frontend interfaces using TypeScript, React, and Node.js. You will be responsible for creating a seamless, efficient experience for both users and internal teams. API Development & Integration: Work on API design and integration, including creating RESTful APIs and ensuring smooth integration with payment platforms like Stripe and other third-party services. 
Event-Driven Architecture: Build and maintain containerised or serverless systems using AWS Lambda, DynamoDB, EventBridge, and other AWS services to create resilient, scalable systems. Localization & Currency Handling: Develop solutions that handle localization and multi-currency requirements, ensuring the platform supports different languages and regions. Scalability & Performance: Focus on building systems that scale efficiently to support global user growth, ensuring performance and security at every level of the application. Collaboration & Mentorship: Collaborate closely with product managers, designers, and engineers to deliver high-quality features. AI-Powered Tools: Leverage AI-driven tools like GitHub Copilot, ChatGPT, and Cursor to accelerate development workflows, automate tasks, and enhance productivity. Required Skills and Experience: Full Stack Expertise: 5+ years of experience building web applications using React (frontend) and Node.js/TypeScript (backend). Cloud Architecture: Extensive experience with AWS services, particularly AWS Lambda, DynamoDB, and EventBridge, with a focus on building event-driven, serverless architectures. API Design & Integration: Strong experience in designing and consuming RESTful APIs, with a focus on integration with third-party services (e.g., Stripe for payment processing). Localization & Internationalization: Experience implementing localization features and handling multiple currencies based on user preferences and locale. CI/CD & Testing: Proficient in CI/CD workflows, using tools like Vitest or Jest for unit, integration and API testing. Version Control & Collaboration: Familiarity with Git and modern Git-based workflows (e.g., pull requests, code reviews, trunk-based development). Frontend Development: Experience with modern JavaScript frameworks like React or Vue, and familiarity with frontend state management (e.g., Redux or Context API). 
Cloud Development Tools: Experience using tools like AWS CDK or Terraform for cloud infrastructure automation and management. Desirable Skills: AI Tool Familiarity: Experience using AI-powered development tools like GitHub Copilot, Cursor, and ChatGPT to enhance workflows and productivity. Containerization: Experience with containerization tools like Docker and orchestrators like AWS ECS. Agile & Scrum: Familiarity with Agile methodologies and tools like Jira for project management.
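The localization and multi-currency handling called out in this role usually starts with doing money math in decimal arithmetic (never binary float) and rounding to each currency's minor unit. A small hypothetical sketch with Python's stdlib `decimal`; the exchange rates and the currency table are invented for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP

# Invented exchange rates and minor-unit exponents, for illustration only.
RATES_FROM_GBP = {"GBP": Decimal("1"), "EUR": Decimal("1.17"), "JPY": Decimal("189.3")}
MINOR_UNITS = {"GBP": 2, "EUR": 2, "JPY": 0}  # JPY has no minor unit

def convert(amount_gbp: Decimal, currency: str) -> Decimal:
    """Convert a GBP amount, rounding to the target currency's minor unit."""
    exponent = Decimal(1).scaleb(-MINOR_UNITS[currency])  # e.g. 0.01, or 1 for JPY
    return (amount_gbp * RATES_FROM_GBP[currency]).quantize(
        exponent, rounding=ROUND_HALF_UP
    )

price = Decimal("19.99")
eur = convert(price, "EUR")  # rounded to cents
jpy = convert(price, "JPY")  # rounded to whole yen
```

In a production system the rates would come from a rate service and the minor-unit table from ISO 4217 data, but the `Decimal`-plus-`quantize` pattern is the part that prevents the classic float rounding bugs.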

Posted 3 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

noida, uttar pradesh

On-site

Job Title/Role: Tech Lead [Java & Python] Location: Noida/Delhi NCR Experience: 7-10 yrs Roles & Responsibilities Understanding the client's business use cases and technical requirements and being able to convert them into technical solutions that elegantly meet the requirements Identifying different solutions and being able to narrow down the best option that meets the business requirements. Develop solution design considering various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensure that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements; and defining technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Excellent communication and teamwork skills Resolving issues that are raised during code review, through exhaustive systematic analysis of the root cause, and being able to justify the decision taken. Should be confident, self-driven with a lot of initiative, and should have the zeal and energy to quickly ramp up on upcoming technologies. Create and contribute to an environment that is geared to innovation, high productivity, high quality and customer service. Experience in communicating with end clients, business users, and other technical teams, and providing estimates. Qualification BTech or MCA in computer science More than 7 years of experience working as a Java full stack technologist - software development, testing and production support - plus the Python programming language and the FastAPI framework. 
Design/Development experience in the Java technical stack: Java/J2EE, Design Patterns, Spring Framework (Core | Boot | MVC)/Hibernate, JavaScript, CSS, HTML, Multithreading, Data Structures, Kafka and SQL Experience with data analytics tools and libraries such as pandas, NumPy, and scikit-learn Familiarity with Content Management Tools and experience with integrating databases, both relational (e.g. Oracle, PostgreSQL, MySQL) and non-relational (e.g. DynamoDB, Mongo, Cassandra) An in-depth understanding of Public/Private/Hybrid Cloud solutions and experience in securely integrating public cloud into traditional hosting/delivery models, with a specific focus on AWS (S3, Lambda, API Gateway, EC2, Cloudflare) Working knowledge of Docker, Kubernetes, UNIX-based operating systems, and Micro-services Should have a clear understanding of continuous integration, build, release, and code quality (GitHub/Jenkins) Should have experience managing teams and time-bound projects. Working in the F&B industry or Aerospace could be an added advantage.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

maharashtra

On-site

As a DevOps Engineer or AWS Cloud Engineer, you will be responsible for setting up AWS infrastructure using Terraform Enterprise and a Concourse (CI/CD) pipeline. Your role will involve configuring and managing various tools and infrastructure components, with a focus on automation wherever possible. You will also be troubleshooting code issues, managing databases such as PostgreSQL, DynamoDB, and Glue, and working with different cloud services. Your responsibilities will include striving for continuous improvement by implementing continuous integration, continuous delivery, and continuous deployment pipelines. Additionally, you will be involved in incident management and root cause analysis of AWS-related issues. To excel in this role, you should have a Master of Science degree in Computer Science, Computer Engineering, or a relevant field. You must have prior work experience as a DevOps Engineer or AWS Cloud Engineer, with a strong understanding of Terraform, Terraform Enterprise, and AWS infrastructure. Proficiency in AWS services, Python, PySpark, and Agile methodology is essential. Experience working with databases such as PostgreSQL, DynamoDB, and Glue will be advantageous. If you are passionate about building scalable and reliable infrastructure on AWS, automating processes, and continuously improving systems, this role offers a challenging yet rewarding opportunity to contribute to the success of the organization.

Posted 3 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

hyderabad, telangana

On-site

Position Overview: The Platform Engineering team is looking for a Senior Backend Engineer to design, develop, and implement robust product APIs and event-driven applications for Packaged Business Capabilities (PBCs). You'll be responsible for incorporating key features like resiliency, observability, and rate limiting to ensure our APIs are secure, performant, and monitored effectively. Responsibilities: Design, develop, and implement robust backend APIs using OpenAPI specs Integrate rate limiting, resiliency strategies, and observability practices Develop cloud-native APIs ensuring scalability, resilience and adherence to best practices Champion event-driven architecture for efficient data flow Leverage cloud engineering principles, preferably with AWS experience Utilize NoSQL databases to store and manage data efficiently Collaborate effectively with cross-functional teams (product, frontend, etc.) Qualifications Required Skills: Solid understanding of RESTful APIs and API design principles (OpenAPI) Experience implementing rate limiting, resiliency patterns, and observability techniques using Kubernetes/OpenShift/Docker Proficiency in cloud engineering, preferably with AWS experience Strong programming skills, particularly in TypeScript and/or Golang Expertise in NoSQL databases (DynamoDB/MongoDB) and caching solutions (Redis/EKS) Familiarity with event-driven architecture concepts Required Experience & Education: 1+ years of experience in Technology A minimum of 0-1 years of experience in backend engineering Excellent communication and collaboration skills Desired Experience: Exposure to AWS Healthcare experience including Disease Management Coaching of team members
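Rate limiting, one of the core responsibilities listed in this role, is most often implemented as a token bucket. A minimal single-process sketch in Python (a production API gateway would typically back the counters with Redis, as the qualifications above suggest); the capacity and refill rate are arbitrary, and the clock is injectable so the example can simulate time:

```python
import time

class TokenBucket:
    """Single-process token-bucket rate limiter.

    `capacity` is the burst size; `rate` is tokens refilled per second.
    """
    def __init__(self, capacity: float, rate: float, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.clock = clock
        self.tokens = capacity  # start full: an initial burst is allowed
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Simulated clock: a 5-request burst is allowed, the 6th is rejected,
# and one token comes back after one simulated second.
t = [0.0]
bucket = TokenBucket(capacity=5, rate=1.0, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(6)]
t[0] = 1.0
later = bucket.allow()
```

The token bucket permits short bursts up to `capacity` while enforcing the long-run `rate`, which is why it is preferred over a fixed-window counter for API traffic.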

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title – Lead Engineer Preferred Location: Hyderabad, India Full time/Part Time - Full Time Build a career with confidence Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do. Role Summary Lead Engineer (Full Stack) is a crucial role in the product development team at Carrier. This role focuses on the design and development of backend and frontend modules, following Carrier software development standards. Role Responsibilities Design and develop AWS IoT/cloud-based applications using TypeScript, Node.js, and ReactJS Work closely with onsite, offshore, and cross-functional teams, Product Management, frontend developers, and SQA teams to use technologies effectively and deliver high quality on time Work closely with solutions architects on low-level design. Effectively plan and delegate the sprint work to the development team while also contributing individually. Proactively identify risks and failure modes early in the development lifecycle and develop POCs to mitigate the risks early in the program This individual is self-directed, highly motivated, and organized, with strong analytical thinking and problem-solving skills, and an ability to work on multiple projects and function in a team environment. Should be able to help and direct junior developers in the right direction when needed Participate in peer code reviews to ensure that developers follow the highest standards in implementing the product. Participate in PI planning and identify any technology-side challenges in implementing a specific Epic/Story. 
Keep an eye on NFRs and ensure our product meets all required compliances per Carrier standards. Minimum Requirements 6-10 years of overall experience in the software domain At least 4 years of experience with cloud-native applications in AWS Solid working knowledge of TypeScript, NodeJS, ReactJS Experience in executing CI/CD processes Experience in developing APIs [REST, GraphQL, Websockets]. Knowledge of AWS IoT Core and in-depth knowledge of AWS cloud-native services including Kinesis, DynamoDB, Lambda, API Gateway, Timestream, SQS, SNS, CloudWatch Solid understanding of creating AWS infra using the Serverless Framework/CDK. Experience in implementing alerts and monitoring to support smooth operations. Solid understanding of the Jest framework (unit testing) and integration tests. Experience in cloud cost optimization and securing AWS services. Benefits We are committed to offering competitive benefits programs for all of our employees, and enhancing our programs when necessary. Have peace of mind and body with our health insurance Make yourself a priority with flexible schedules and leave policy Drive forward your career through professional development opportunities Achieve your personal goals with our Employee Assistance Program. Our commitment to you Our greatest assets are the expertise, creativity and passion of our employees. We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Apply Now! 
Carrier is An Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class. Job Applicant's Privacy Notice Click on this link to read the Job Applicant's Privacy Notice
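The Carrier role centers on building REST APIs with Node.js/TypeScript behind API Gateway and Lambda. Purely as an illustrative sketch (the event shape, `DeviceRequest` type, and handler name are assumptions for this example, not from the posting; a real project would use the `aws-lambda` type package), a minimal Lambda-style handler might look like:

```typescript
// Minimal sketch of an AWS Lambda-style REST handler in TypeScript.
// Types and names (ApiEvent, DeviceRequest) are illustrative assumptions.

interface ApiEvent {
  body: string | null;
}

interface ApiResult {
  statusCode: number;
  body: string;
}

interface DeviceRequest {
  deviceId: string;
}

export function handler(event: ApiEvent): ApiResult {
  if (!event.body) {
    return { statusCode: 400, body: JSON.stringify({ error: "missing body" }) };
  }
  let req: DeviceRequest;
  try {
    req = JSON.parse(event.body);
  } catch {
    return { statusCode: 400, body: JSON.stringify({ error: "invalid JSON" }) };
  }
  if (!req.deviceId) {
    return { statusCode: 422, body: JSON.stringify({ error: "deviceId required" }) };
  }
  // A real handler would read/write DynamoDB here before responding.
  return { statusCode: 200, body: JSON.stringify({ deviceId: req.deviceId, status: "ok" }) };
}
```

In a serverless setup the function would be wired to API Gateway via the Serverless Framework or CDK, with unit tests in Jest exercising the same branches shown here.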

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Deliver on critical business priorities while ensuring alignment with the wider architectural vision
Identify and help address potential risks in the data supply chain
Follow and contribute to technical standards
Design and develop analytical data models

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course)
3 to 4 years’ experience implementing data-intensive solutions using agile methodologies
Experience of relational databases and using SQL for data querying, transformation and manipulation
Experience of modelling data for analytical consumers
Ability to automate and streamline the build, test and deployment of data pipelines
Experience in cloud native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills

Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines.
Proficiency in at least one data integration platform such as Ab Initio, Apache Spark, Talend or Informatica
Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing
Data Warehousing & Database Management: Understanding of data warehousing concepts; relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
Others: Basics of job schedulers like Autosys; basics of entitlement management
Certification on any of the above topics would be an advantage.
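The posting above asks for hands-on experience building ETL data pipelines. As a hedged sketch only (the record shape and field names are invented for illustration, and a production pipeline at this scale would use a platform like Ab Initio or Spark rather than plain TypeScript), the core of a transform-and-aggregate step looks like:

```typescript
// Minimal ETL transform sketch: validate raw records, coerce types,
// and aggregate totals per account. All field names are hypothetical.

interface RawTxn {
  account: string;
  amount: string; // raw feeds often carry numbers as strings
  currency: string;
}

interface CleanTxn {
  account: string;
  amount: number;
  currency: string;
}

// Transform: drop malformed rows, coerce types, normalize currency codes.
export function transform(rows: RawTxn[]): CleanTxn[] {
  return rows
    .map((r) => ({
      account: r.account,
      amount: Number(r.amount),
      currency: r.currency.toUpperCase(),
    }))
    .filter((r) => r.account !== "" && Number.isFinite(r.amount));
}

// Load-side aggregation: total amount per account.
export function totalsByAccount(rows: CleanTxn[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of rows) {
    totals.set(r.account, (totals.get(r.account) ?? 0) + r.amount);
  }
  return totals;
}
```

The same validate/coerce/aggregate shape is what the SQL and data-modeling requirements above express at warehouse scale.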
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Should be able to write bash scripts for monitoring existing running infrastructure and reporting out
Should be able to extend existing IaC code in Pulumi (TypeScript)
Ability to debug and fix Kubernetes deployment failures, network connectivity, ingress, and volume issues with kubectl
Good knowledge of networking basics to debug connectivity issues with tools like dig, bash, ping, curl, ssh, etc.
Knowledge of monitoring tools like Splunk, CloudWatch, and the Kubernetes dashboard, and of creating dashboards and alerts when and where needed
Knowledge of AWS VPC, subnetting, ALB/NLB, egress/ingress
Knowledge of performing disaster recovery from prepared backups for DynamoDB, Kubernetes volume storage, Keyspaces, etc. (AWS Backup, Amazon S3, Systems Manager)
Set up sensible permission defaults for seamless access management of cloud resources using services like AWS IAM, AWS policy management, AWS KMS, Kubernetes RBAC, etc.
Understanding of best practices for security, access management, hybrid cloud, etc.
Knowledge of advanced Kubernetes concepts and tools like service mesh, cluster mesh, Karpenter, Kustomize, etc.
Templatise IaC creation with Pulumi and Terraform, using advanced techniques for modularisation
Extend existing Helm charts for repetitive work and orchestration, and write the corresponding Terraform/Pulumi code
Automate complicated manual infrastructure setup with Ansible, Chef, etc.
Certifications:
▪ AWS Certified Advanced Networking - Specialty
▪ AWS Certified DevOps Engineer - Professional (DOP-C02)
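This role combines kubectl debugging with scripted monitoring and reporting. As a hedged sketch (the bad-reason list, restart threshold, and function name are assumptions for illustration), a small TypeScript helper could parse `kubectl get pods -o json` output and report pods stuck in a bad state:

```typescript
// Sketch: flag unhealthy pods from `kubectl get pods -o json` output.
// The interface below covers only the fields this check needs.

interface PodList {
  items: {
    metadata: { name: string };
    status: {
      containerStatuses?: {
        state: { waiting?: { reason?: string } };
        restartCount: number;
      }[];
    };
  }[];
}

const BAD_REASONS = ["CrashLoopBackOff", "ImagePullBackOff", "ErrImagePull"];

// Returns names of pods waiting for a known-bad reason or restarting
// more than maxRestarts times.
export function unhealthyPods(list: PodList, maxRestarts = 5): string[] {
  const bad: string[] = [];
  for (const pod of list.items) {
    for (const c of pod.status.containerStatuses ?? []) {
      const reason = c.state.waiting?.reason;
      if ((reason && BAD_REASONS.includes(reason)) || c.restartCount > maxRestarts) {
        bad.push(pod.metadata.name);
        break;
      }
    }
  }
  return bad;
}
```

In practice this would be fed from `kubectl get pods -A -o json` inside a cron-driven bash wrapper, with the results pushed to CloudWatch or Splunk as the posting describes.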

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

As a Senior Quality Assurance Engineer, you must be able to provide the following:
Ability to work in an autonomous, self-responsible and self-organised way
6+ years of experience in software testing, manual and automated
Strong experience working with modern test automation frameworks and tools (Cypress, Playwright, Jest, React testing libraries…)
Strong experience in different testing practices (from unit to load to endurance to cross-platform), specifically integrated within CI/CD
Experience in continuous testing practices in production by leveraging bots and virtual users
Experience working with CI/CD pipelines and monitoring tools (e.g. Jenkins, TeamCity, Kibana, Grafana, etc.)
Knowledge of API testing (Postman, AWS), the REST protocol, and microservice architecture concepts
Able to communicate effectively in English
Comfortable developing test automation frameworks from scratch and maintaining existing frameworks
Knowledge of software testing theory

As our Senior Quality Assurance Engineer, you embrace the following responsibilities:
Take ownership of and responsibility for the design and development of all aspects of testing
Work on acceptance criteria and test scenarios with the Product Owner and development team
Design, execute, and maintain test scenarios and automation capabilities for all test levels and types (e.g., automated, regression, exploratory, etc.)
Create and optimize test frameworks and integrate them into deployment pipelines
Participate in the code review process for both production and test code to ensure all critical cases are covered
Monitor test runs, application errors and performance
Keep information flowing, keep the team informed, and be a stakeholder in releases and defect tracking
Promote and coach the team towards a quality-focused mindset
Influence and lead the team towards continuous improvement and best testing practices
Be the reference for the QA Center of Practice, promoting its practices and influencing its strategy, bringing your team experience into its plan.

These are some of the technologies/frameworks/practices we use:
Node.js with TypeScript
React and Next.js
Contentful CMS
Optimizely experimentation platform
Microservices, event streams and file exchange
CI/CD with Jenkins pipelines
AWS and Terraform
InfluxDB, Kibana, Grafana, Sensu, ELK stack
Infrastructure as code, one-click deployment
Docker, Kubernetes
Amazon Web Services and cloud deployments (S3, SNS, SQS, RDS, DynamoDB, etc.), using tools such as Terraform or the AWS CLI
Git, Scrum, Pair Programming, Peer Reviewing
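Continuous testing in production, as described above, usually needs retries around eventually-consistent checks. As a generic sketch (a common utility pattern, not taken from the posting's actual framework), a retry wrapper like this is a typical building block in such suites:

```typescript
// Sketch: retry a flaky check a fixed number of times before failing.
// Useful around eventually-consistent assertions in e2e/API tests.

export function retry<T>(check: () => T, attempts: number): T {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return check();
    } catch (err) {
      lastError = err; // swallow and retry
    }
  }
  throw lastError;
}
```

In a Playwright or Jest suite this would typically wrap an HTTP status or UI-state assertion, usually with a delay between attempts.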

Posted 3 weeks ago

Apply

0 years

0 Lacs

Krishnagiri, Tamil Nadu, India

On-site

Experience with cloud databases and data warehouses (AWS Aurora, RDS/PG, Redshift, DynamoDB, Neptune)
Building and maintaining scalable real-time database systems using the AWS stack (Aurora, RDS/PG, Lambda) to enhance business decision-making capabilities
Provide valuable insights and contribute to the design, development, and architecture of data solutions
Experience utilizing various design and coding techniques to improve query performance
Expertise in performance optimization, capacity management, and workload management
Working knowledge of relational database internals (locking, consistency, serialization, recovery paths)
Awareness of customer workloads and use cases, including performance, availability, and scalability
Monitor database health and promptly identify and resolve issues
Maintain comprehensive documentation for databases, business continuity plans, cost usage, and processes
Proficient in using Terraform or Ansible for database provisioning and infrastructure management
Additional nice-to-have expertise in Python, Databricks, Apache Airflow, Google Cloud Platform (GCP), and Microsoft Azure
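The requirements above emphasize key and query design for performance, and DynamoDB query performance hinges on key layout. As an illustrative sketch only (the entity names, separators, and key layout are invented, not from the posting), single-table DynamoDB designs often build composite partition/sort keys like this:

```typescript
// Sketch: composite key builders for a single-table DynamoDB design.
// Entity names (CUSTOMER, ORDER) and separators are hypothetical.

interface OrderKey {
  pk: string; // partition key
  sk: string; // sort key
}

// Orders live under their customer's partition; the sort key embeds the
// ISO date so a single Query with begins_with can fetch a month's orders.
export function orderKey(customerId: string, orderId: string, isoDate: string): OrderKey {
  return {
    pk: `CUSTOMER#${customerId}`,
    sk: `ORDER#${isoDate}#${orderId}`,
  };
}

// Sort-key prefix for "all orders in a given month" queries.
export function monthPrefix(yearMonth: string): string {
  return `ORDER#${yearMonth}`;
}
```

The design choice: pushing the access pattern into the key shape turns what would be a table scan into a single cheap Query, which is the kind of technique the "improve query performance" requirement points at.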

Posted 3 weeks ago

Apply