
44,412 GCP Jobs - Page 50

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

18 Lacs

India

Remote

Experience: 4+ years
Salary: INR 1,800,000 per year (based on experience)
Expected Notice Period: 7 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SuiteSolvers)
(Note: This is a requirement for one of Uplers' clients, an Atlanta-based IT Services and IT Consulting company.)

What do you need for this opportunity?
Must-have skills: Docker, Vector Databases, Fintech, Testing and Deployment, Data Science, Artificial Intelligence (AI), Large Language Model APIs (LLM APIs), Large Language Models (LLMs), Prompt Engineering, FastAPI / Flask, Cloud

About the Job
SuiteSolvers is a boutique consulting firm that helps mid-market companies transform and scale through smart ERP implementations, financial automation, and operational strategy. We specialize in NetSuite and Acumatica, and we're building tools that make finance and operations more intelligent and less manual. Our clients range from high-growth startups to billion-dollar enterprises. We're hands-on, fast-moving, and results-driven; our work shows up in better decisions, faster closes, cleaner audits, and smarter systems. We're not a bloated agency. We're a small team with high standards. If you like solving real business problems with clean data pipelines, smart automation, and the occasional duct-tape hack that gets the job done, this might be your kind of place. We are looking for a Data Engineer.

Essential Technical Skills

AI/ML (Required)
- 2+ years of hands-on experience with LLM APIs (OpenAI, Anthropic, or similar)
- Production deployment of at least one AI system that is currently running in production
- LLM framework experience with LangChain, CrewAI, or AutoGen (any one is sufficient)
- Function calling / tool use: ability to build AI systems that can call external APIs and functions (see the sketch after this listing)
- Basic prompt engineering: understanding of techniques like Chain-of-Thought and ReAct patterns

Python Development (Required)
- 3+ years of Python development with strong fundamentals
- API development using Flask or FastAPI with proper error handling
- Async programming: understanding of async/await patterns for concurrent operations
- Database integration: working with PostgreSQL, MySQL, or similar relational databases
- JSON/REST APIs: consuming and building REST services

Production Systems (Required)
- 2+ years building production software that serves real users
- Error handling and logging: building robust systems that handle failures gracefully
- Basic cloud deployment: experience with AWS, Azure, or GCP (any one platform)
- Git/version control: collaborative development using Git workflows
- Testing fundamentals: unit testing and integration testing practices

Business Process (Basic, Required)
- User requirements: ability to translate business needs into technical solutions
- Data quality: recognizing and handling dirty/inconsistent data
- Exception handling: designing workflows for edge cases and errors

Professional Experience (Minimum)

Software Engineering
- 3+ years of total software development experience
- 1+ production AI project: any AI/ML system deployed to production (even simple ones)
- Cross-functional collaboration: worked with non-technical stakeholders
- Problem-solving: demonstrated ability to debug and resolve complex technical issues

Communication & Collaboration
- Technical documentation: ability to write clear technical docs and code comments
- Stakeholder communication: explain technical concepts to business users
- Independent work: ability to work autonomously with minimal supervision
- Learning agility: quickly pick up new technologies and frameworks

Educational Background (Any One)
- Bachelor's degree in Computer Science, Engineering, or a related technical field, OR equivalent experience (demonstrable technical skills through projects/work)
- Coding bootcamp plus 2+ years of professional development experience
- Self-taught, with a strong portfolio of production projects
- Technical certifications (AWS, Google Cloud, etc.) plus relevant experience (nice to have)

Demonstrable Skills (Portfolio Requirements)
Must show evidence of:
- One working AI application: GitHub repo or live demo of an LLM integration
- Python projects: code samples showing API development and data processing
- Production deployment: any application currently running and serving users
- Problem-solving ability: examples of debugging complex issues or optimizing performance

Nice to Have (Not Required)
- Financial services or fintech experience
- Vector database (Pinecone, Weaviate) experience
- Docker/containerization knowledge
- Advanced ML/AI education or certifications

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
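To make the function calling / tool use requirement concrete, here is a minimal sketch, assuming the openai Python SDK (v1+), an OPENAI_API_KEY in the environment, and a hypothetical get_invoice_status helper; it illustrates the tool-use loop and is not part of the posting itself.

```python
# Minimal function-calling sketch (openai SDK v1+); the tool and its schema
# are hypothetical illustrations, not part of the original posting.
import json
from openai import OpenAI

client = OpenAI()

def get_invoice_status(invoice_id: str) -> dict:
    """Hypothetical external lookup the model can ask us to call."""
    return {"invoice_id": invoice_id, "status": "paid"}

tools = [{
    "type": "function",
    "function": {
        "name": "get_invoice_status",
        "description": "Look up the payment status of an invoice.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_id": {"type": "string"}},
            "required": ["invoice_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Has invoice INV-1042 been paid?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)

# Assumes the model chose to call the tool; production code would check first.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = get_invoice_status(**args)              # run the requested tool locally
messages.append(response.choices[0].message)     # echo the assistant turn back
messages.append({"role": "tool", "tool_call_id": tool_call.id,
                 "content": json.dumps(result)})

final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(final.choices[0].message.content)
```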

Posted 3 days ago

Apply

1.0 years

0 Lacs

India

Remote

Who We Are
At Twilio, we're shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work, and strong culture of connection and global inclusion, means that no matter your location, you're part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we're acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.

See Yourself at Twilio
Join the team as Twilio's next IT Support Engineer L1.

Who We Are & Why We're Hiring
Twilio powers real-time business communications and data solutions that help companies and developers worldwide build better applications and customer experiences. Although we're headquartered in San Francisco, we have a presence throughout South America, Europe, Asia, and Australia. We're on a journey to becoming a global company that actively opposes racism and all forms of oppression and bias. At Twilio, we support diversity, equity & inclusion wherever we do business.

Responsibilities
In this role, you'll:
- Perform hardware/software troubleshooting and resolve user access issues.
- Manage end-to-end ticket processes, including escalation, resolution, and closure.
- Achieve individual SLA and KPI scorecard targets.
- Provide empathetic and patient customer support.
- Assist with new hire onboarding and deliver basic training to end-users.
- Contribute to the IT Services knowledge base: update or create articles to enhance team knowledge and address trending incidents.
- Collaborate with the automation team for support and issue resolution: utilize scripts and automation tools to streamline problem resolution.
- Assist end-users in leveraging native AI products for problem-solving.
- Handle approximately 25 work items daily.
- Monitor servers, process events, and address issues.
- Support automation, Dev activities, incident management, and process improvement.
- Assist with managing SSL certificates, patching, PCI audits, and pre-production deployment/testing (a certificate-expiry scripting sketch follows below).

Qualifications
Not all applicants will have skills that match a job description exactly. Twilio values diverse experiences in other industries, and we encourage everyone who meets the required qualifications to apply. While having "desired" qualifications makes for a strong candidate, we encourage applicants with alternative experiences to also apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required Privacy Elements
Understanding of and adherence to data protection regulations. Proven commitment to upholding privacy standards. Implementation of privacy best practices. Integration of privacy considerations into application development and support processes. Familiarity with GDPR, CCPA, or other relevant privacy regulations.

Required Business Competencies
Must live and embody Twilio's values. Wear our customers' shoes and understand their perspective when helping with issues and requests. Customer-centric mindset, understanding and empathizing with customer perspectives. Excellent communication skills, including written, verbal, and interpersonal abilities. Proficiency in supporting endpoint security standards and adhering to defined controls. 1-3 years of experience supporting business technologies, or an equivalent service background with personal technical experience. Passion for technology and staying up to date with industry trends. Strong customer service and critical thinking skills, utilizing available resources effectively. Growth mindset and positive attitude, particularly in ambiguous situations. Documentation skills, contributing to the knowledge base for self-service support. Experience using a ticketing system, such as ServiceNow. Strong organizational, time management, and communication skills. Flexibility to work in a 24/7 operation and travel between office locations as required. Humility, and willingness to learn and grow from successes and failures. Adaptability to a fast-paced, changing environment. Focus on progress rather than perfection. Transparency, thoughtfulness, and collaboration in business practices. Willingness to help others and promote a collaborative work environment.

Required Technical Skills and Experience
1+ years of IT-related engineering experience or equivalent education in a relevant technology field. Extensive experience working with the Apple and Windows ecosystems. Basic understanding of automation tools and processes. Exposure to native AI products and concepts. Strong problem-solving skills with a focus on finding efficient and automated solutions. Analytical mindset to troubleshoot and resolve technical issues. Excellent documentation skills and meticulous attention to detail. Basic knowledge of automation tools, scripts, and AI concepts.

Desired
Experience with networking and ZTNA technologies such as Cisco, Zscaler, or GlobalProtect. Any foundation ITIL v4 certification. Familiarity with scripting languages (Python, PowerShell, etc.). Knowledge of the SDLC. Any foundation AWS, GCP, or Azure cloud certification. Experience working in a SAFe or similar environment. Apple Certified Support Professional or Microsoft equivalent. AI, ML, or automation-centric certifications across AWS, Google, or Microsoft. Experience working with MDM tools such as Jamf and Kandji.

Location
This role will be based in our Bengaluru, India, office.

Travel
We prioritize connection and opportunities to build relationships with our customers and each other. For this role, some travel is anticipated to help you connect in person in a meaningful way.

What We Offer
Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.

Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values, something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions. Twilio is proud to be an equal opportunity employer.
We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.
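Since the role mentions scripting (Python, PowerShell) and helping manage SSL certificates, here is a minimal Python sketch of a certificate-expiry check; the hostnames are illustrative, and this is only an assumption about the kind of automation involved, not a tool named in the posting.

```python
# Minimal sketch of a scripted SSL certificate expiry check; hosts are illustrative.
import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(hostname: str, port: int = 443) -> int:
    """Return the number of days until the server certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like "Jun  1 12:00:00 2025 GMT"
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for host in ["example.com", "example.org"]:   # illustrative hostnames
    print(host, days_until_cert_expiry(host), "days remaining")
```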

Posted 3 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Cognite
Embark on a transformative journey with Cognite, a global SaaS forerunner in leveraging AI and data to unravel complex business challenges through our cutting-edge offerings, including Cognite Atlas AI, an industrial agent workbench, and the Cognite Data Fusion (CDF) platform. We were awarded the 2022 Technology Innovation Leader for Global Digital Industrial Platforms, and Cognite was recognized as the 2024 Microsoft Energy and Resources Partner of the Year. In the realm of industrial digital transformation, we stand at the forefront, reshaping the future of Oil & Gas, Chemicals, Pharma, and other Manufacturing and Energy sectors. Join us in this venture where AI and data meet ingenuity, and together we forge the path to a smarter, more connected industrial future.

Learn more about Cognite here: Cognite Product Tour 2024, Cognite Product Tour 2023, Data Contextualization Masterclass 2023.

Our Values
Impact: Cogniters strive to make an impact in all that they do. We are result-oriented.
Ownership: Cogniters embrace a culture of ownership. We go beyond our comfort zones to contribute to the greater good, fostering inclusivity and sharing responsibility for challenges and success.
Relentless: Cogniters are relentless in their pursuit of innovation. We are determined and deliberate (never ruthless or reckless), facing challenges head-on and viewing setbacks as opportunities for growth.

Role Summary
As a QA Engineer in the GSS INDIA Deployment Pack Team, you will play a key role in validating the functionality and reliability of our deployment packs before they are delivered to customers or used in real-world projects. You will be responsible for designing, executing, and maintaining test cases and automation scripts to ensure the deployment packs are free from defects, meet business requirements, and are ready for production deployment. You will also participate in customer project deployments to validate and troubleshoot live use cases, ensuring the solutions we deliver are error-free and meet customer expectations.

Key Responsibilities
- Design and execute test plans and test cases (manual and automated) for validating deployment pack functionality.
- Validate integration between Cognite Data Fusion (CDF) components and third-party systems within the deployment packs.
- Simulate customer scenarios in staging environments to proactively identify and address issues before deployment.
- Participate in customer deployments to verify real-world execution of use cases.
- Collaborate with solution architects, developers, and customer teams to ensure alignment on requirements and solution expectations.
- Report bugs and issues, and suggest enhancements, using Jira or other tools.
- Develop automated test scripts to improve regression coverage and CI/CD validation.
- Conduct root-cause analysis and provide timely resolutions or feedback loops to the development team.
- Maintain thorough documentation of test cases, test results, and release sign-offs.
- Contribute to continuous improvement of QA processes, test frameworks, and deployment validation methodologies.

Required Qualifications & Skills
- 2-5 years of QA experience in software or SaaS environments, preferably within industrial or data platforms.
- Hands-on experience with test planning, execution, bug tracking, and test automation tools.
- Strong understanding of REST APIs, Python scripting, and data validation techniques.
- Familiarity with cloud platforms (Azure, AWS, or GCP) and DevOps concepts is a plus.
- Experience with tools like Postman, PyTest, Selenium, or equivalent testing frameworks (a small PyTest sketch follows this listing).
- Strong analytical and problem-solving skills, with attention to detail.
- Excellent communication skills and the ability to collaborate across geographically distributed teams.
- Experience in customer-facing roles or project delivery is a strong advantage.

Preferred Attributes
- Prior exposure to Cognite Data Fusion (CDF) or other industrial data platforms.
- Experience working in Agile/Scrum environments.
- Passion for clean, maintainable, and well-documented test suites.
- Self-driven, proactive, and capable of managing responsibilities with minimal supervision.

Join the global Cognite community! 🌐
- Join an organization of 70 different nationalities 🌐 with Diversity, Equity and Inclusion (DEI) in focus 🤝
- Office location: Rathi Legacy (Rohan Tech Park), Hoodi (Bengaluru)
- A highly modern and fun working environment with a sublime culture across the organization; follow us on Instagram @cognitedata 📷 to learn more
- Flat structure with direct access to decision-makers and a minimal amount of bureaucracy
- Opportunity to work with and learn from some of the best people on some of the most ambitious projects found anywhere, across industries
- Join our HUB 🗣️ to be part of the conversation directly with Cogniters and our partners
- Hybrid work environment globally

Why choose Cognite? 🏆 🚀
Join us in making a real and lasting impact in one of the most exciting and fastest-growing new software companies in the world. We have repeatedly demonstrated that digital transformation, when anchored on strong DataOps, drives business value and sustainability for clients and allows front-line workers, as well as domain experts, to make better decisions every single day. We were recognized as one of CNBC's top global enterprise technology startups powering digital transformation! And just recently, Frost & Sullivan named Cognite a Technology Innovation Leader! 🥇 Most recently, Cognite Data Fusion® achieved industry-first DNV compliance for digital twins. 🥇

Apply today!
If you're excited about the opportunity to work at Cognite and make a difference in the tech industry, we encourage you to apply today! We welcome candidates of all backgrounds and identities to join our team. We encourage you to follow us on Cognite LinkedIn; we post all our openings there.
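As a concrete illustration of the REST-API and data-validation testing this role describes, here is a minimal PyTest sketch; the endpoint, parameters, and expected fields are hypothetical placeholders, not Cognite Data Fusion's actual API.

```python
# Minimal PyTest sketch of REST-API validation; URL and schema are hypothetical.
import pytest
import requests

BASE_URL = "https://example.com/api"   # placeholder for the system under test

def test_asset_endpoint_returns_expected_fields():
    response = requests.get(f"{BASE_URL}/assets", params={"limit": 1}, timeout=10)
    assert response.status_code == 200
    payload = response.json()
    assert "items" in payload                      # basic schema validation
    for item in payload["items"]:
        assert {"id", "name"} <= item.keys()       # required fields present

@pytest.mark.parametrize("limit", [0, -1, 10001])
def test_asset_endpoint_rejects_invalid_limits(limit):
    response = requests.get(f"{BASE_URL}/assets", params={"limit": limit}, timeout=10)
    assert response.status_code in (400, 422)      # invalid input is rejected
```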

Posted 3 days ago

Apply

8.0 - 13.0 years

13 - 18 Lacs

Chennai

Work from Office

A Senior Solutions Architect at Amazon Web Services (AWS) designs and implements cloud architectures to help customers solve business challenges. Requirements and responsibilities include:
- 18+ years of experience as an AWS Solutions Architect
- Design and implement AWS cloud landing zones using AWS Control Tower
- Understand customers' business and technical requirements
- Translate those requirements into technical solutions
- Design and develop cloud solutions
- Migrate existing workloads to the cloud
- Optimize existing solutions
- Propose new AWS solutions
- Ensure solutions comply with industry standards and security requirements
- Lead the technical solution design and the creation of architecture blueprints
- Perform systems modelling, simulation, and analysis to ensure the soundness of the solution
- Review prototypes, solution blueprints, and project scope to ensure that stakeholders' needs are being met
- Support the analysis of the functionality and constraints of recommended solutions, and the impact and changes required across multiple systems, platforms, and applications
- Support risk identification and the development of risk mitigation strategies associated with the end-to-end solution
- Foster continuous improvement by looking for ways to improve design processes, models, and approaches
(A small boto3 sketch of inspecting landing-zone accounts follows this listing.)
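Purely as an illustration of the programmatic side of landing-zone work, and as an assumption about tooling since the posting names no SDK, here is a minimal boto3 sketch that lists the member accounts of an AWS Organization (it requires credentials with Organizations read access).

```python
# Minimal sketch: enumerate the accounts enrolled under an AWS Organization,
# assuming boto3 and credentials with organizations:ListAccounts permission.
import boto3

org = boto3.client("organizations")
paginator = org.get_paginator("list_accounts")
for page in paginator.paginate():
    for account in page["Accounts"]:
        print(account["Id"], account["Name"], account["Status"])
```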

Posted 3 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference #: 318002BR
Job Type: Full Time

Your role
- Build advanced statistical models to generate meaningful and actionable insights, improve decision making, optimize business processes, and help address business problems.
- Work on generative AI (GenAI) models to deliver personalized insights and solutions for clients, financial advisors, and prospects.
- Leverage advanced data science techniques to analyze large datasets and extract actionable insights.
- Collaborate with cross-functional teams to integrate predictive models into wealth management decision-making processes.
- Implement and utilize tools like LangChain, LangGraph, and vector databases to build scalable, efficient AI-driven solutions (a minimal retrieval sketch follows this listing).

Your team
You will be part of the Data Science STAAT Team, WMA Technology. Our Data Science team is at the heart of DAFP's function to manage and support data science efforts across different business areas.

Your expertise
- Bachelor's degree or higher in machine learning, computer science, computational linguistics, mathematics, or another relevant technical field from a premier institute.
- Strong foundation in probability, statistics, linear algebra, calculus, machine learning, and generative AI/LLMs.
- Expertise in deep learning techniques and applications.
- Strong understanding of generative AI (GenAI), including the math behind the models and their practical implementations.
- Hands-on experience working with large language model APIs such as Azure OpenAI and Anthropic, and cloud ecosystems such as Azure, AWS, GCP, and Databricks.
- Proficiency in LangChain, LangGraph, and working with vector databases for advanced data retrieval and AI workflows.
- Proficiency in Python or similar programming languages, with experience in AI/ML libraries and frameworks such as TensorFlow, PyTorch, or Hugging Face.
- Knowledge of financial products, including mutual funds, ETFs, and client segmentation, is a plus.

About Us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management, and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more.

Join us
At UBS, we know that it's our people, with their diverse skills, experiences, and backgrounds, who drive our ongoing success. We're dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow, and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That's why collaboration is at the heart of everything we do. Because together, we're more than ourselves. We're committed to disability inclusion, and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills, and experiences within our workforce.
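To make the vector-database retrieval step above concrete, here is a from-scratch sketch of similarity search; a real deployment would use an embedding API and a managed vector store, and the documents and vectors here are placeholder stand-ins.

```python
# From-scratch sketch of the retrieval step behind a vector-database workflow;
# embeddings are random placeholders rather than real model output.
import numpy as np

documents = ["ETF fee schedule", "Mutual fund onboarding", "Client segmentation"]
doc_vectors = np.random.rand(len(documents), 384)   # placeholder embeddings
query_vector = np.random.rand(384)                  # placeholder query embedding

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine_similarity(query_vector, v) for v in doc_vectors]
best = int(np.argmax(scores))
print("Most relevant context:", documents[best])
```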

Posted 3 days ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Mumbai

Work from Office

Mandatory Skills:
1. 5+ years of experience in the design & development of state-of-the-art language models; utilize off-the-shelf LLM services, such as Azure OpenAI, to integrate LLM capabilities into applications.
2. Deep understanding of language models and a strong proficiency in designing and implementing RAG-based workflows to enhance content generation and information retrieval (a minimal RAG sketch follows this listing).
3. Experience in building, customizing and fine-tuning LLMs via OpenAI Studio, extended through Azure OpenAI cognitive services, for rapid PoCs.
4. Proven track record of successfully deploying and optimizing LLM models in the cloud (AWS, Azure, or GCP) for inference in production environments, and proven ability to optimize LLM models for inference speed, memory efficiency, and resource utilization.
5. Apply prompt engineering techniques to design refined and contextually relevant prompts for language models.
6. Monitor and analyze the performance of LLMs by experimenting with various prompts, evaluating results, and refining strategies accordingly.
7. Building customizable, conversable AI agents for complex tasks using CrewAI and LangGraph to enhance Gen AI solutions.
8. Proficiency in MCP (Model Context Protocol) for optimizing context-aware AI model performance and integration is a plus.
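As an illustration of the RAG-based workflow and Azure OpenAI integration described above, here is a minimal sketch, assuming the openai SDK's AzureOpenAI client and a hypothetical deployment name; the retrieve() stub stands in for a real vector-store lookup.

```python
# Minimal RAG-style sketch; deployment name, endpoint, and retrieve() are assumptions.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

def retrieve(question: str) -> list[str]:
    """Stub for a vector-store similarity search."""
    return ["Policy doc excerpt A", "Policy doc excerpt B"]

question = "What is the refund window for premium plans?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer using only the context below. If the answer is not in the "
    f"context, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",          # Azure deployment name (assumption)
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(response.choices[0].message.content)
```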

Posted 3 days ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

On-site

Requirements
- Minimum 5 years of experience in machine learning engineering and AI development.
- Deep expertise in machine learning algorithms and techniques such as supervised/unsupervised learning, deep learning, and reinforcement learning.
- Solid experience in natural language processing (NLP): language models, text generation, sentiment analysis, etc. (a short pipeline sketch follows this listing).
- Proven understanding of generative AI concepts such as text/image/audio synthesis, diffusion models, and transformers.
- Expertise in Agentic AI and in building real-world applications with it.
- Experience working with Agentic AI frameworks such as LangGraph, ADK, and AutoGen.
- Hands-on experience in developing and deploying generative AI applications (text generation, conversational AI, image synthesis, etc.).
- Experience in MLOps and ML model deployment pipelines.
- Proficiency in programming languages such as Python, and ML frameworks such as TensorFlow and PyTorch.
- Knowledge of cloud platforms (AWS, GCP, Azure) and tools for scalable ML solution deployment.
- Experience with data processing, feature engineering, and model training on large datasets.
- Familiarity with responsible AI practices, AI ethics, model governance, and risk mitigation.
- Understanding of software engineering best practices and how to apply them to ML systems.
- Experience with an agile development environment.
- Exposure to tools/frameworks for monitoring and analysing ML model performance and data accuracy.
- Strong problem-solving, analytical, and communication abilities.
- Bachelor's or master's degree in computer science, AI, statistics, math, or related fields.

Responsibilities
- Design, develop, and optimize machine learning models for applications across different domains.
- Build natural language processing pipelines for tasks like text generation, summarization, and translation.
- Develop and deploy cutting-edge generative AI and Agentic AI applications.
- Implement MLOps practices: model training, evaluation, deployment, monitoring, and maintenance.
- Integrate machine learning capabilities into existing products or build new AI-powered applications.
- Perform data mining, cleaning, preparation, and augmentation for training robust ML models.
- Collaborate with cross-functional teams to translate AI requirements into technical implementations.
- Ensure model performance, scalability, and reliability.
- Continuously research and implement state-of-the-art AI/ML algorithms and techniques.
- Manage the end-to-end ML lifecycle from data processing to production deployment.
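To illustrate the kind of NLP pipeline work this role lists, here is a minimal sketch using the Hugging Face transformers pipeline API; the default models it downloads and the sample text are illustrative choices, not requirements from the posting.

```python
# Minimal NLP pipeline sketch with Hugging Face transformers; default models
# are downloaded on first use, and the input text is illustrative.
from transformers import pipeline

summarizer = pipeline("summarization")        # uses the library's default model
sentiment = pipeline("sentiment-analysis")

text = (
    "The quarterly report shows revenue growth driven by the new product "
    "line, while operating costs remained flat compared to last year."
)
print(summarizer(text, max_length=30, min_length=10)[0]["summary_text"])
print(sentiment(text)[0])                      # e.g. {'label': 'POSITIVE', 'score': ...}
```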

Posted 3 days ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

On-site

Minimum 5 years of experience in machine learning engineering and AI development. Deep expertise in machine learning algorithms and techniques like supervised/unsupervised learning, deep learning, reinforcement learning, etc. Solid experience in natural language processing (NLP) - language models, text generation, sentiment analysis, etc. Proven understanding of generative AI concepts like text/image/audio synthesis, diffusion models, transformers, etc. Expertise in Agentic AI and building real world applications using the same. Experience working with Agentic AI frameworks like Langgraph, ADK, Autogen etc. Hands-on experience in developing and deploying generative AI applications (text generation, conversational AI, image synthesis, etc.) Experience in MLOps and ML model deployment pipelines. Proficiency in programming languages like Python, and ML frameworks like TensorFlow, PyTorch, etc. Knowledge of cloud platforms (AWS, GCP, Azure) and tools for scalable ML solution deployment. Experience with data processing, feature engineering, and model training on large datasets. Familiarity with responsible AI practices, AI ethics, model governance and risk mitigation. Understanding of software engineering best practices and applying them to ML systems. Experience with an agile development environment. Exposure in tools/framework to monitor and analyse ML model performance and data accuracy. Strong problem-solving, analytical, and communication abilities. Bachelor’s or master’s degree in computer science, AI, Statistics, Math or related fields. Responsibilities Design, develop and optimize machine learning models for applications across different domains. Build natural language processing pipelines for tasks like text generation, summarization, translation, etc. Develop and deploy cutting-edge generative AI & Agentic AI applications. Implement MLOps practices - model training, evaluation, deployment, monitoring, and maintenance. Integrate machine learning capabilities into existing products or build new AI-powered applications. Perform data mining, cleaning, preparation, and augmentation for training robust ML models. Collaborate with cross-functional teams to translate AI requirements into technical implementations. Ensure model performance, scalability, reliability. Continuously research and implement state-of-the-art AI/ML algorithms and techniques. Manage the end-to-end ML lifecycle from data processing to production deployment.

Posted 3 days ago

Apply

7.0 - 9.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Key points: Should have 2+ years of professional backend-building experience. Should have experience working in Java/C++/Go, with experience in multithreading, object-oriented design patterns, and microservices architecture. Experience developing cloud architecture on leading cloud providers (Azure/AWS/GCP) is a must. Prior work with LLM/ML applications would be a bonus.

Posted 3 days ago

Apply

4.0 - 7.0 years

4 - 7 Lacs

Chennai

Work from Office

Job Summary:
We are looking for a skilled and experienced AEM Admin to join our team. The ideal candidate will have hands-on experience managing and administering Adobe Experience Manager (AEM) environments, as well as expertise in integrating AEM with DevOps tools. You will be responsible for maintaining AEM operations, handling the deployment and management of AEM environments, and optimizing the AEM platform's performance. Additionally, experience with CI/CD pipelines and AEM DevOps administration is highly preferred.

Key Responsibilities:
- AEM Operations: Manage and monitor AEM environments, ensuring they are optimized and running smoothly.
- AEM Admin: Administer, configure, and troubleshoot AEM instances in both production and non-production environments.
- DevOps Integration: Work closely with DevOps teams to integrate AEM with various DevOps tools and ensure efficient automation of workflows.
- CI/CD Pipeline: Set up and manage Continuous Integration/Continuous Deployment (CI/CD) pipelines for seamless AEM code deployment and version control.
- System Monitoring: Proactively monitor AEM instances, identify performance bottlenecks, and ensure security compliance.
- Troubleshooting: Diagnose and resolve issues related to AEM, including performance, deployment, and integration problems.
- Collaboration: Work with developers, operations teams, and other stakeholders to ensure successful AEM implementation and smooth operation.
- Documentation: Create and maintain detailed documentation for AEM administration processes, configurations, and best practices.

Required Skills and Qualifications:
- Minimum 2 years of experience as an AEM Admin with a strong understanding of AEM architecture, operations, and configuration.
- Hands-on experience with AEM operations, including monitoring, troubleshooting, and optimization.
- DevOps tools knowledge: experience working with DevOps tools and technologies, including Jenkins, Docker, Kubernetes, and Git.
- Experience in setting up and managing CI/CD pipelines for AEM applications and code deployments.
- Strong understanding of AEM integration with DevOps practices and tools.
- Good knowledge of AEM versioning, clustering, replication, and scaling in cloud-based environments.
- Familiarity with Linux/Unix systems, shell scripting, and automation.
- Excellent problem-solving and troubleshooting skills, with the ability to handle critical situations effectively.
- Ability to work in a team-oriented, fast-paced environment with minimal supervision.
- AEM DevOps Admin experience is a strong plus.

Preferred Skills:
- Familiarity with cloud-based AEM environments (e.g., AWS, Azure, Google Cloud).
- Knowledge of Adobe Cloud Manager and AEM as a Cloud Service.
- Strong communication skills and the ability to work collaboratively with cross-functional teams.

Qualifications:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 4 to 8 years of relevant work experience in AEM administration, DevOps, and CI/CD pipeline management.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

On-site

Minimum 5 years of experience in machine learning engineering and AI development. Deep expertise in machine learning algorithms and techniques like supervised/unsupervised learning, deep learning, reinforcement learning, etc. Solid experience in natural language processing (NLP) - language models, text generation, sentiment analysis, etc. Proven understanding of generative AI concepts like text/image/audio synthesis, diffusion models, transformers, etc. Expertise in Agentic AI and building real world applications using the same. Experience working with Agentic AI frameworks like Langgraph, ADK , Autogen etc. Hands-on experience in developing and deploying generative AI applications (text generation, conversational AI, image synthesis, etc.) Experience in MLOps and ML model deployment pipelines. Proficiency in programming languages like Python, and ML frameworks like TensorFlow, PyTorch, etc. Knowledge of cloud platforms (AWS, GCP, Azure) and tools for scalable ML solution deployment. Experience with data processing, feature engineering, and model training on large datasets. Familiarity with responsible AI practices, AI ethics, model governance and risk mitigation. Understanding of software engineering best practices and applying them to ML systems. Experience with an agile development environment. Exposure in tools/framework to monitor and analyse ML model performance and data accuracy. Strong problem-solving, analytical, and communication abilities. Bachelor’s or master’s degree in computer science, AI, Statistics, Math or related fields. Responsibilities Design, develop and optimize machine learning models for applications across different domains. Build natural language processing pipelines for tasks like text generation, summarization, translation, etc. Develop and deploy cutting-edge generative AI & Agentic AI applications. Implement MLOps practices - model training, evaluation, deployment, monitoring, and maintenance. Integrate machine learning capabilities into existing products or build new AI-powered applications. Perform data mining, cleaning, preparation, and augmentation for training robust ML models. Collaborate with cross-functional teams to translate AI requirements into technical implementations. Ensure model performance, scalability, reliability. Continuously research and implement state-of-the-art AI/ML algorithms and techniques. Manage the end-to-end ML lifecycle from data processing to production deployment.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

On-site

Minimum 5 years of experience in machine learning engineering and AI development Deep expertise in machine learning algorithms and techniques like supervised/unsupervised learning, deep learning, reinforcement learning, etc. Solid experience in natural language processing (NLP) - language models, text generation, sentiment analysis, etc. Proven understanding of generative AI concepts like text/image/audio synthesis, diffusion models, transformers, etc. Expertise in Agentic AI and building real world applications using the same. Experience working with Agentic AI frameworks like Langgraph, ADK, Autogen etc. Hands-on experience in developing and deploying generative AI applications (text generation, conversational AI, image synthesis, etc.) Experience in MLOps and ML model deployment pipelines. Proficiency in programming languages like Python, and ML frameworks like TensorFlow, PyTorch, etc. Knowledge of cloud platforms (AWS, GCP, Azure) and tools for scalable ML solution deployment. Experience with data processing, feature engineering, and model training on large datasets. Familiarity with responsible AI practices, AI ethics, model governance and risk mitigation. Understanding of software engineering best practices and applying them to ML systems. Experience with an agile development environment. Exposure in tools/framework to monitor and analyse ML model performance and data accuracy. Strong problem-solving, analytical, and communication abilities. Bachelor’s or master’s degree in computer science, AI, Statistics, Math or related fields. Responsibilities Design, develop and optimize machine learning models for applications across different domains. Build natural language processing pipelines for tasks like text generation, summarization, translation, etc. Develop and deploy cutting-edge generative AI & Agentic AI applications. Implement MLOps practices - model training, evaluation, deployment, monitoring, and maintenance. Integrate machine learning capabilities into existing products or build new AI-powered applications. Perform data mining, cleaning, preparation, and augmentation for training robust ML models. Collaborate with cross-functional teams to translate AI requirements into technical implementations. Ensure model performance, scalability, reliability. Continuously research and implement state-of-the-art AI/ML algorithms and techniques. Manage the end-to-end ML lifecycle from data processing to production deployment.

Posted 3 days ago

Apply

5.0 - 9.0 years

13 - 16 Lacs

Chennai

Work from Office

Key Responsibilities:
- Develop and maintain scalable full-stack applications using Java, Spring Boot, and Angular for building rich UI screens and custom/reusable components.
- Design and implement cloud-based solutions leveraging Google Cloud Platform (GCP) services such as BigQuery, Google Cloud Storage, Cloud Run, and Pub/Sub (a small Pub/Sub sketch follows this listing).
- Manage and optimize CI/CD pipelines using Tekton to ensure smooth and efficient development workflows.
- Deploy and manage Google Cloud services using Terraform, following infrastructure-as-code principles.
- Mentor and guide junior software engineers, fostering professional development and promoting systemic change across the development team.
- Collaborate with cross-functional teams to design, build, and maintain efficient, reusable, and reliable code.
- Drive best practices and improvements in software engineering processes, including coding standards, testing, and deployment strategies.

Required Skills:
- Java/Spring Boot (5+ years): In-depth experience in developing backend services and APIs using Java and Spring Boot.
- Angular (3+ years): Proven ability to build rich, dynamic user interfaces and custom/reusable components using Angular.
- Google Cloud Platform (2+ years): Hands-on experience with GCP services such as BigQuery, Google Cloud Storage, Cloud Run, and Pub/Sub.
- CI/CD pipelines (2+ years): Experience with tools like Tekton for automating build and deployment processes.
- Terraform (1-2 years): Experience in deploying and managing GCP services using Terraform.
- J2EE (5+ years): Strong experience in Java Enterprise Edition for building large-scale applications.
- Experience mentoring and delivering organizational change within a software development team.
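The stack above is Java/Spring Boot, but as a language-neutral illustration of the Pub/Sub publishing step the role mentions, here is a minimal sketch with the google-cloud-pubsub Python client; the project ID, topic name, payload, and attribute are placeholders.

```python
# Minimal Pub/Sub publish sketch; project, topic, and payload are placeholders,
# and Application Default Credentials are assumed to be configured.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "order-events")  # placeholders

future = publisher.publish(topic_path, data=b'{"orderId": "1042"}',
                           source="checkout-service")  # extra kwargs become attributes
print("Published message id:", future.result())
```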

Posted 3 days ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Dhule

Work from Office

4+ years of experience in the development of automation test cases, preferably in a SaaS environment. Strong hands-on experience with Cypress, Selenium, or RestAssured for automation frameworks. Superior JavaScript skills required. Experience working directly with JMeter, LoadRunner, Karate API, YAML, and Postman. Superior knowledge of agile best practices and continuous testing in a CI/CD environment such as GCP DevOps. Experience with SCM tools like GitHub and JIRA; familiarity with SQL. P&C Insurance industry experience required. Bachelor's degree in Computer Science, Computer Engineering, or a related field.

Posted 3 days ago

Apply

4.0 years

18 Lacs

Kochi, Kerala, India

Remote

Experience : 4.00 + years Salary : INR 1800000.00 / year (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Suite Solvers) (*Note: This is a requirement for one of Uplers' client - An Atlanta based IT Services and IT Consulting Company) What do you need for this opportunity? Must have skills required: Docker, Vector Database, Fintech, Testing and deployment, Data Science, Artificial Intelligence (AI), Large Language Model APIs (LLM APIs), LLM APIs, Large Language Model (LLM), Prompt Engineering, FastAPI / Flask, Cloud An Atlanta based IT Services and IT Consulting Company is Looking for: About The Job SuiteSolvers is a boutique consulting firm that helps mid-market companies transform and scale through smart ERP implementations, financial automation, and operational strategy. We specialize in NetSuite and Acumatica, and we’re building tools that make finance and operations more intelligent and less manual. Our clients range from high-growth startups to billion-dollar enterprises. We’re hands-on, fast-moving, and results-driven—our work shows up in better decisions, faster closes, cleaner audits, and smarter systems. We’re not a bloated agency. We’re a small team with high standards. If you like solving real business problems with clean data pipelines, smart automation, and the occasional duct-tape hack that gets the job done—this might be your kind of place. We are looking for a Data Engineer. Essential Technical Skills AI/ML (Required) 2+ years hands-on experience with LLM APIs (OpenAI, Anthropic, or similar) Production deployment of at least one AI system that's currently running in production LLM framework experience with LangChain, CrewAI, or AutoGen (any one is sufficient) Function calling/tool use - ability to build AI systems that can call external APIs and functions Basic prompt engineering - understanding of techniques like Chain-of-Thought and ReAct patterns Python Development (Required) 3+ years Python development with strong fundamentals API development using Flask or FastAPI with proper error handling Async programming - understanding of async/await patterns for concurrent operations Database integration - working with PostgreSQL, MySQL, or similar relational databases JSON/REST APIs - consuming and building REST services Production Systems (Required) 2+ years building production software that serves real users Error handling and logging - building robust systems that handle failures gracefully Basic cloud deployment - experience with AWS, Azure, or GCP (any one platform) Git/version control - collaborative development using Git workflows Testing fundamentals - unit testing and integration testing practices Business Process (Basic Required) User requirements - ability to translate business needs into technical solutions Data quality - recognizing and handling dirty/inconsistent data Exception handling - designing workflows for edge cases and errors Professional Experience (Minimum) Software Engineering 3+ years total software development experience 1+ production AI project - any AI/ML system deployed to production (even simple ones) Cross-functional collaboration - worked with non-technical stakeholders Problem-solving - demonstrated ability to debug and resolve complex technical issues Communication & Collaboration Technical documentation - ability to write clear technical docs and code comments Stakeholder communication - 
explain technical concepts to business users Independent work - ability to work autonomously with minimal supervision Learning agility - quickly pick up new technologies and frameworks Educational Background (Any One) Formal Education Bachelor's degree in Computer Science, Engineering, or related technical field OR equivalent experience - demonstrable technical skills through projects/work Alternative Paths Coding bootcamp + 2+ years professional development experience Self-taught with strong portfolio of production projects Technical certifications (AWS, Google Cloud, etc.) + relevant experience [nice to have] Demonstrable Skills (Portfolio Requirements) Must Show Evidence Of One working AI application - GitHub repo or live demo of LLM integration Python projects - code samples showing API development and data processing Production deployment - any application currently running and serving users Problem-solving ability - examples of debugging complex issues or optimizing performance Nice to Have (Not Required) Financial services or fintech experience Vector databases (Pinecone, Weaviate) experience Docker/containerization knowledge Advanced ML/AI education or certifications How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 days ago

Apply

4.0 years

18 Lacs

Greater Bhopal Area

Remote

Experience : 4.00 + years Salary : INR 1800000.00 / year (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Suite Solvers) (*Note: This is a requirement for one of Uplers' client - An Atlanta based IT Services and IT Consulting Company) What do you need for this opportunity? Must have skills required: Docker, Vector Database, Fintech, Testing and deployment, Data Science, Artificial Intelligence (AI), Large Language Model APIs (LLM APIs), LLM APIs, Large Language Model (LLM), Prompt Engineering, FastAPI / Flask, Cloud An Atlanta based IT Services and IT Consulting Company is Looking for: About The Job SuiteSolvers is a boutique consulting firm that helps mid-market companies transform and scale through smart ERP implementations, financial automation, and operational strategy. We specialize in NetSuite and Acumatica, and we’re building tools that make finance and operations more intelligent and less manual. Our clients range from high-growth startups to billion-dollar enterprises. We’re hands-on, fast-moving, and results-driven—our work shows up in better decisions, faster closes, cleaner audits, and smarter systems. We’re not a bloated agency. We’re a small team with high standards. If you like solving real business problems with clean data pipelines, smart automation, and the occasional duct-tape hack that gets the job done—this might be your kind of place. We are looking for a Data Engineer. Essential Technical Skills AI/ML (Required) 2+ years hands-on experience with LLM APIs (OpenAI, Anthropic, or similar) Production deployment of at least one AI system that's currently running in production LLM framework experience with LangChain, CrewAI, or AutoGen (any one is sufficient) Function calling/tool use - ability to build AI systems that can call external APIs and functions Basic prompt engineering - understanding of techniques like Chain-of-Thought and ReAct patterns Python Development (Required) 3+ years Python development with strong fundamentals API development using Flask or FastAPI with proper error handling Async programming - understanding of async/await patterns for concurrent operations Database integration - working with PostgreSQL, MySQL, or similar relational databases JSON/REST APIs - consuming and building REST services Production Systems (Required) 2+ years building production software that serves real users Error handling and logging - building robust systems that handle failures gracefully Basic cloud deployment - experience with AWS, Azure, or GCP (any one platform) Git/version control - collaborative development using Git workflows Testing fundamentals - unit testing and integration testing practices Business Process (Basic Required) User requirements - ability to translate business needs into technical solutions Data quality - recognizing and handling dirty/inconsistent data Exception handling - designing workflows for edge cases and errors Professional Experience (Minimum) Software Engineering 3+ years total software development experience 1+ production AI project - any AI/ML system deployed to production (even simple ones) Cross-functional collaboration - worked with non-technical stakeholders Problem-solving - demonstrated ability to debug and resolve complex technical issues Communication & Collaboration Technical documentation - ability to write clear technical docs and code comments Stakeholder communication - 
explain technical concepts to business users Independent work - ability to work autonomously with minimal supervision Learning agility - quickly pick up new technologies and frameworks Educational Background (Any One) Formal Education Bachelor's degree in Computer Science, Engineering, or related technical field OR equivalent experience - demonstrable technical skills through projects/work Alternative Paths Coding bootcamp + 2+ years professional development experience Self-taught with strong portfolio of production projects Technical certifications (AWS, Google Cloud, etc.) + relevant experience [nice to have] Demonstrable Skills (Portfolio Requirements) Must Show Evidence Of One working AI application - GitHub repo or live demo of LLM integration Python projects - code samples showing API development and data processing Production deployment - any application currently running and serving users Problem-solving ability - examples of debugging complex issues or optimizing performance Nice to Have (Not Required) Financial services or fintech experience Vector databases (Pinecone, Weaviate) experience Docker/containerization knowledge Advanced ML/AI education or certifications How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 days ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

Responsibilities
- Design, implement, and manage cloud infrastructure on GCP, including Cloud Run, Pub/Sub, GCS, IAM, Redis, Load Balancing, and Vertex AI (a small GCS scripting sketch follows this listing).
- Support and optimize Salesforce development workflows and tools such as SFDX, Salesforce CLI, and the Force.com Migration Tool.
- Manage containerized microservices using Kubernetes, Docker, and Cloud Run, integrated with Kafka, Spring Boot, and MuleSoft.
- Build and maintain CI/CD pipelines using GitHub Actions, Jenkins, Maven, SonarQube, Checkmarx, and JFrog Artifactory.
- Manage source control and branching strategies using GitHub, ensuring code integrity, traceability, and collaboration across teams.
- Implement and manage API gateways to support secure and scalable service integrations.
- Automate infrastructure provisioning and configuration using IaC tooling such as Terraform.
- Monitor systems health and performance using Splunk, Dynatrace, and Datadog.
- Ensure adherence to security best practices, including IAM, cloud security, encryption, and secure networking.
- Collaborate with cross-functional teams to support agile development, testing, and deployment processes.
- Participate in incident response, root-cause analysis, and continuous improvement initiatives.
- Learn new technologies and stay up to date with the latest industry trends.

Required Skills & Experience
- 5+ years of experience in DevOps, cloud infrastructure, or platform engineering roles.
- Proficiency with source control, continuous integration, and testing pipelines using tools such as GitHub, GitHub Actions, Jenkins, Maven, SonarQube, and Checkmarx.
- Strong hands-on experience with GCP and cloud-native architectures.
- Proficiency in Salesforce development and deployment (Apex, LWC) and tooling (SFDX, CLI).
- Experience with middleware platforms: Kafka, Spring Boot, MuleSoft.
- Solid understanding of container orchestration (Kubernetes, Docker) and CI/CD pipelines.
- Proficiency in scripting languages: Python, Bash, Groovy.
- Familiarity with REST, JSON, and GraphQL APIs.
- Strong grasp of cloud security, IAM, and networking principles.
- Experience with monitoring and observability tools.
- Ability to thrive in fast-paced environments with frequent context switching and complex integration challenges.
- Excellent multitasking skills and the ability to manage several workstreams simultaneously.
- Proven ability to analyze complex requirements and provide innovative solutions.
- Experience following Agile development practices with globally distributed teams.
- Excellent communication skills; able to communicate effectively with a range of stakeholders, from management to other engineers, and present to both technical and non-technical audiences.
- Solid organizational, time management, and judgment skills.
- Ability to work independently and in a team environment.

Preferred Qualifications
- Certifications in GCP, AWS, or Azure.
- Experience with AI/ML services such as Vertex AI, CCAI, or Dialogflow.
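To illustrate the Python scripting and GCS work this role lists, here is a minimal sketch, assuming Application Default Credentials are configured; the bucket name and the 30-day age threshold are illustrative choices, not details from the posting.

```python
# Minimal GCS housekeeping sketch: list objects older than a cutoff (cleanup candidates).
# Bucket name and age threshold are placeholders; ADC credentials are assumed.
from datetime import datetime, timedelta, timezone
from google.cloud import storage

client = storage.Client()
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

for blob in client.list_blobs("my-deploy-artifacts"):   # placeholder bucket
    if blob.time_created < cutoff:
        print("stale artifact:", blob.name, blob.time_created.date())
```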

Posted 3 days ago

Apply

4.0 years

18 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote

Experience : 4.00 + years Salary : INR 1800000.00 / year (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Suite Solvers) (*Note: This is a requirement for one of Uplers' client - An Atlanta based IT Services and IT Consulting Company) What do you need for this opportunity? Must have skills required: Docker, Vector Database, Fintech, Testing and deployment, Data Science, Artificial Intelligence (AI), Large Language Model APIs (LLM APIs), LLM APIs, Large Language Model (LLM), Prompt Engineering, FastAPI / Flask, Cloud An Atlanta based IT Services and IT Consulting Company is Looking for: About The Job SuiteSolvers is a boutique consulting firm that helps mid-market companies transform and scale through smart ERP implementations, financial automation, and operational strategy. We specialize in NetSuite and Acumatica, and we’re building tools that make finance and operations more intelligent and less manual. Our clients range from high-growth startups to billion-dollar enterprises. We’re hands-on, fast-moving, and results-driven—our work shows up in better decisions, faster closes, cleaner audits, and smarter systems. We’re not a bloated agency. We’re a small team with high standards. If you like solving real business problems with clean data pipelines, smart automation, and the occasional duct-tape hack that gets the job done—this might be your kind of place. We are looking for a Data Engineer. Essential Technical Skills AI/ML (Required) 2+ years hands-on experience with LLM APIs (OpenAI, Anthropic, or similar) Production deployment of at least one AI system that's currently running in production LLM framework experience with LangChain, CrewAI, or AutoGen (any one is sufficient) Function calling/tool use - ability to build AI systems that can call external APIs and functions Basic prompt engineering - understanding of techniques like Chain-of-Thought and ReAct patterns Python Development (Required) 3+ years Python development with strong fundamentals API development using Flask or FastAPI with proper error handling Async programming - understanding of async/await patterns for concurrent operations Database integration - working with PostgreSQL, MySQL, or similar relational databases JSON/REST APIs - consuming and building REST services Production Systems (Required) 2+ years building production software that serves real users Error handling and logging - building robust systems that handle failures gracefully Basic cloud deployment - experience with AWS, Azure, or GCP (any one platform) Git/version control - collaborative development using Git workflows Testing fundamentals - unit testing and integration testing practices Business Process (Basic Required) User requirements - ability to translate business needs into technical solutions Data quality - recognizing and handling dirty/inconsistent data Exception handling - designing workflows for edge cases and errors Professional Experience (Minimum) Software Engineering 3+ years total software development experience 1+ production AI project - any AI/ML system deployed to production (even simple ones) Cross-functional collaboration - worked with non-technical stakeholders Problem-solving - demonstrated ability to debug and resolve complex technical issues Communication & Collaboration Technical documentation - ability to write clear technical docs and code comments Stakeholder communication - 
explain technical concepts to business users
Independent work - ability to work autonomously with minimal supervision
Learning agility - quickly pick up new technologies and frameworks

Educational Background (Any One)
Formal Education
Bachelor's degree in Computer Science, Engineering, or related technical field
OR equivalent experience - demonstrable technical skills through projects/work
Alternative Paths
Coding bootcamp + 2+ years professional development experience
Self-taught with strong portfolio of production projects
Technical certifications (AWS, Google Cloud, etc.) + relevant experience [nice to have]

Demonstrable Skills (Portfolio Requirements)
Must Show Evidence Of
One working AI application - GitHub repo or live demo of LLM integration
Python projects - code samples showing API development and data processing
Production deployment - any application currently running and serving users
Problem-solving ability - examples of debugging complex issues or optimizing performance

Nice to Have (Not Required)
Financial services or fintech experience
Vector databases (Pinecone, Weaviate) experience
Docker/containerization knowledge
Advanced ML/AI education or certifications

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
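The listing above asks for hands-on function calling / tool use with an LLM API. Below is a minimal sketch of what that pattern looks like, assuming the OpenAI Python SDK (v1-style chat.completions client); the model name and the get_invoice_status helper are purely illustrative placeholders, not details from the posting.

```python
# Minimal function-calling sketch (illustrative only).
# Assumptions: openai>=1.0 Python SDK, OPENAI_API_KEY set in the environment,
# and a hypothetical ERP helper `get_invoice_status` defined locally.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_invoice_status(invoice_id: str) -> dict:
    # Hypothetical stand-in for a real ERP lookup.
    return {"invoice_id": invoice_id, "status": "paid"}

tools = [{
    "type": "function",
    "function": {
        "name": "get_invoice_status",
        "description": "Look up the payment status of an invoice by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_id": {"type": "string"}},
            "required": ["invoice_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Has invoice INV-1042 been paid?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=messages,
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model chose to call our tool; run it and send the result back.
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_invoice_status(**args)
    messages.append(message)
    messages.append({
        "role": "tool",
        "tool_call_id": call.id,
        "content": json.dumps(result),
    })
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)
else:
    print(message.content)
```

The same call-and-respond loop generalizes to multiple tools, and frameworks such as LangChain, CrewAI, or AutoGen (also named in the listing) wrap it behind higher-level abstractions.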

Posted 3 days ago

Apply

8.0 years

0 Lacs

ahmedabad, gujarat, india

On-site

Lead the architecture, implementation, and administration of Red Hat Linux infrastructure and distributed cloud environments (AWS, Azure, GCP).
Develop and execute infrastructure strategies aligned with business goals, security, and scalability.
Manage a team of Linux and Cloud Engineers; provide mentorship and technical guidance.
Oversee system performance, high availability, reliability, and capacity planning.
Drive automation initiatives using tools such as Ansible, Terraform, Puppet, or Chef.
Manage enterprise services: authentication, backup & recovery, monitoring, and security controls across hybrid environments.
Ensure compliance with regulatory requirements and implement advanced security practices (IAM, VPC, firewalls, auditing).
Collaborate with cross-functional teams on infrastructure projects, migrations, and cloud adoption.
Conceive and deliver cloud migration strategies, cost optimization, and governance frameworks.
Manage vendor relationships, budgets, and service delivery for infrastructure solutions.
Foster a culture of collaboration, innovation, and adherence to IT best practices.

Key Skills & Requirements
Red Hat Certification (RHCE/RHCA mandatory).
8+ years of experience in enterprise-level Linux administration, including at least 3 years in a management/supervisory role.
Proven expertise in designing and operating cloud platforms (AWS, Azure, GCP) in enterprise settings.
Deep understanding of Linux systems (RHEL/CentOS/Ubuntu), cloud architecture, infrastructure as code, and automation tooling.
Demonstrated project management skills (Agile/DevOps methodologies are a plus).
Strong proficiency in troubleshooting, security hardening, system optimization, and disaster recovery.
Excellent communication, leadership, vendor management, and team development abilities.

Educational Qualification
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Red Hat Certification (RHCE/RHCA mandatory); additional cloud certifications preferred (AWS Solutions Architect, Azure Administrator, etc.).

Posted 3 days ago

Apply

4.0 years

18 Lacs

indore, madhya pradesh, india

Remote


Posted 3 days ago

Apply

0 years

0 Lacs

trivandrum, kerala, india

On-site

Role Description
Role Proficiency: Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions. Account for others' development activities; assist the Project Manager in day-to-day project execution.

Outcomes
Interpret application feature and component designs and develop them in accordance with specifications.
Code, debug, test, document, and communicate product, component, and feature development stages.
Validate results with user representatives, integrating and commissioning the overall solution.
Select and create appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, while creating own solutions for new contexts.
Optimise efficiency, cost, and quality.
Influence and improve customer satisfaction.
Influence and improve employee engagement within the project teams.
Set FAST goals for self/team; provide feedback on team members' FAST goals.

Measures Of Outcomes
Adherence to engineering process and standards (coding standards)
Adherence to project schedule/timelines
Number of technical issues uncovered during project execution
Number of defects in the code
Number of defects post delivery
Number of non-compliance issues
Percentage of voluntary attrition
On-time completion of mandatory compliance trainings

Outputs Expected
Code: Code as per the design. Define coding standards, templates, and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, design documentation, requirements, test cases, and results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review/create unit test cases, scenarios, and execution. Review the test plan created by the testing team. Provide clarifications to the testing team.
Domain Relevance: Advise software developers on the design and development of features and components with a deeper understanding of the business problem being addressed for the client. Learn more about the customer domain and identify opportunities to provide value addition to customers. Complete relevant domain certifications.
Manage Project: Support the Project Manager with inputs for the projects. Manage delivery of modules. Manage complex user stories.
Manage Defects: Perform defect RCA and mitigation. Identify defect trends and take proactive measures to improve quality.
Estimate: Create and provide input for effort and size estimation and plan resources for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, features, business components, and data models.
Interface With Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos. Work closely with customer architects to finalize designs.
Manage Team: Set FAST goals and provide feedback. Understand the aspirations of team members and provide guidance, opportunities, etc. Ensure team members are upskilled and engaged in the project. Proactively identify attrition risks and work with BSE on retention measures.
Certifications: Obtain relevant domain and technology certifications.

Skill Examples
Explain and communicate the design/development to the customer.
Perform and evaluate test results against product specifications.
Break down complex problems into logical components.
Develop user interfaces and business software components.
Use data models.
Estimate the time, effort, and resources required for developing/debugging features/components.
Perform and evaluate tests in the customer or target environments.
Make quick decisions on technical/project-related challenges.
Manage a team, mentor, and handle people-related issues in the team.
Maintain high motivation levels and positive dynamics within the team.
Interface with other teams, designers, and other parallel practices.
Set goals for self and team; provide feedback to team members.
Create and articulate impactful technical presentations.
Follow a high level of business etiquette in emails and other business communication.
Drive conference calls with customers and answer customer questions.
Proactively ask for and offer help.
Work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
Build confidence with customers by meeting deliverables on time with a quality product.

Knowledge Examples
Appropriate software programs/modules
Functional and technical design
Programming languages - proficiency in multiple skill clusters
DBMS
Operating systems and software platforms
Software Development Life Cycle
Agile - Scrum or Kanban methods
Integrated development environments (IDE)
Rapid application development (RAD)
Modelling technologies and languages
Interface definition languages (IDL)
Broad knowledge of the customer domain and deep knowledge of the sub-domain where the problem is solved

Additional Comments
We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in software development and data engineering, and hold certifications in GCP Data Engineering and Cloud Architecture.

Key Responsibilities:
Develop, optimize, and maintain scalable data pipelines and workflows.
Design and implement solutions using cloud-based data warehouses such as GCP BigQuery, Snowflake, and Databricks.
Collaborate with product and business teams to understand data requirements and deliver actionable insights.
Work in an Agile environment, contributing to sprint planning, task prioritization, and iterative delivery.
Write clean, maintainable, and efficient code. Knowledge of Python is an advantage.
Ensure data security, integrity, and reliability across all systems.

Required Qualifications:
Strong background in data engineering, including ETL processes and data pipeline construction.
Basics of RESTful APIs (GET, POST, PUT, DELETE methods).
Knowledge of how APIs communicate and exchange data (JSON or XML formats).
Skills in organizing and transforming data fetched from APIs using Python libraries like pandas and json (see the example sketch after this listing).
Understanding of how to avoid vulnerabilities when working with APIs, such as not hardcoding sensitive information.
Use of tools like Postman or Python libraries to test and debug API calls before integrating them into a system.
Familiarity with Python libraries for API interactions, such as requests, http.client, urllib, and database connectors.
Skills to handle API response errors and exceptions effectively.
Knowledge of common authentication methods like OAuth, API keys, or Basic Auth for securing API calls.
Hands-on experience with cloud-based data warehouses like BigQuery, Snowflake, and Databricks.
Proven ability to work in Agile frameworks and deliver high-quality results in fast-paced environments.
Excellent communication skills to liaise effectively with product and business teams.
Familiarity with data visualization tools is advantageous.

Skills: SQL queries, data warehousing, GCP, BigQuery
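Several of the qualifications above (consuming a REST API with requests, handling response errors, key-based authentication, and reshaping the JSON payload with pandas) come together in a single small pipeline step. The sketch below is illustrative only: the endpoint URL, the X-API-Key header, and the "orders" payload shape are hypothetical placeholders, not details from the posting.

```python
# Illustrative sketch: fetch JSON from a REST API and normalize it with pandas.
# Assumptions: endpoint, auth header, and payload shape are hypothetical;
# the API key is read from the environment rather than hardcoded.
import os
import requests
import pandas as pd

API_URL = "https://api.example.com/v1/orders"  # placeholder endpoint
API_KEY = os.environ["EXAMPLE_API_KEY"]        # never hardcode secrets

def fetch_orders(page: int = 1) -> list[dict]:
    """Call the API with key-based auth and handle common failure modes."""
    try:
        resp = requests.get(
            API_URL,
            headers={"X-API-Key": API_KEY},  # hypothetical auth header
            params={"page": page},
            timeout=10,
        )
        resp.raise_for_status()  # turn 4xx/5xx responses into exceptions
        return resp.json().get("orders", [])
    except requests.exceptions.Timeout:
        print("Request timed out; retry with backoff in a real pipeline.")
        return []
    except requests.exceptions.RequestException as err:
        print(f"API call failed: {err}")
        return []

orders = fetch_orders()
if orders:
    # Flatten nested JSON into a tabular frame, ready for a warehouse load.
    df = pd.json_normalize(orders)
    if "amount" in df.columns:
        df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    print(df.head())
```

Loading the resulting frame into BigQuery, Snowflake, or Databricks would be a separate step using each warehouse's own Python client, and is outside the scope of this sketch.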

Posted 3 days ago

Apply

4.0 years

18 Lacs

chandigarh, india

Remote


Posted 3 days ago

Apply

4.0 years

18 Lacs

mysore, karnataka, india

Remote


Posted 3 days ago

Apply

4.0 years

18 Lacs

dehradun, uttarakhand, india

Remote


Posted 3 days ago

Apply

0 years

0 Lacs

bhopal, madhya pradesh, india

Remote

Freelance Node.js / Next.js Developer
Location: Remote (Australia-based company)
Engagement: Freelance / Contract

About Us
We are an Australian-based company building modern, scalable, and user-friendly web applications. Our mission is to deliver high-quality digital solutions that empower businesses and improve customer experiences. We are looking for a skilled Freelance Node.js / Next.js Developer with excellent communication skills to join our distributed team.

Role & Responsibilities
Develop, maintain, and optimize web applications using Node.js and Next.js.
Collaborate closely with our Australian product team to translate business requirements into technical solutions.
Ensure performance, scalability, and security best practices are implemented.
Write clean, maintainable, and well-documented code.
Participate in stand-ups, sprint planning, and client demos as required.
Debug, troubleshoot, and resolve technical issues in a timely manner.
Communicate effectively with stakeholders and team members in clear, professional English.

Required Skills & Qualifications
Strong hands-on experience with Node.js and Next.js.
Proficiency in JavaScript (ES6+) and TypeScript (preferred).
Solid understanding of REST APIs, GraphQL, and modern web architecture.
Experience with front-end technologies: HTML5, CSS3, and the React ecosystem.
Knowledge of server-side rendering (SSR) and static site generation (SSG) in Next.js.
Familiarity with databases (SQL or NoSQL).
Experience with Git and CI/CD pipelines.
Excellent communication skills in English, both written and verbal.
Ability to work independently, manage deadlines, and align with time zones in Australia.

Nice-to-Have Skills
Experience with cloud platforms (GCP, AWS, or Azure).
Understanding of UI/UX best practices.
Prior work with Australian or international clients.
Familiarity with Agile/Scrum methodologies.

What We Offer
Flexible remote working arrangement.
Competitive freelance compensation.
Opportunity to work with a professional and collaborative Australian team.
Exposure to international projects with real impact.

Posted 3 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies