
1617 Cloud Platforms Jobs - Page 27

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 15.0 years

10 - 15 Lacs

Bengaluru, Karnataka, India

On-site

BASIC QUALIFICATIONS
- Experience collaborating with cross-functional teams to drive projects and solutions that achieve customer satisfaction, exceed customer expectations, and create and execute business plans to accelerate the adoption of Capillary products
- 7+ years of design/implementation/consulting experience with large-scale enterprise applications
- 3+ years of experience working with multi-national customers in support of technology and sales
- Demonstrated experience developing enterprise application architectures to meet business requirements in complex environments
- Experience with cloud solutions, virtual platforms, software development, and operational management practices and frameworks
- Understanding of security, risk, and compliance frameworks, disaster recovery, high-availability architectures, hardware, operating systems, and networking connectivity
- Large-scale systems integration involving public, private, and hybrid cloud platforms
- Technical degree required (minimum BTech/BE in CSE/IT)

PREFERRED QUALIFICATIONS
- Master's degree
- Professional experience architecting and operating solutions built on SaaS products and cloud platforms
- Experience communicating across internal and external organizations

ESSENTIAL FUNCTIONS OF THE JOB
A Solution Architect provides architecture leadership and subject matter expertise to client engagements, focusing on complex, innovative products and reusable assets. Before kicking off a project as part of the product life cycle, the Solution Architect develops solution plans intended to support business investment decisions, which means striking the right balance between cost, risk, and product quality. The Solution Architect creates innovative and practical designs that account for the end-to-end technical solution of a system, in line with the business strategy and objectives and within the context of the technical environment. For that, you shall work closely and continuously with the business/client to meet their requirements while incorporating broader aspects such as overall product costs and revenue, data privacy and sovereignty, business continuity, information security, and integration with other systems.

You shall be key in identifying, defining, and implementing reusable assets and standards. The Solution Architect is also responsible for adherence to these standards and the consumption of reusable assets across products and portfolios. You shall ensure that relevant technical strategies, policies, standards, and practices are applied correctly across technology programs, projects, and products, and you shall contribute to the development of architecture governance structures, methodologies, and compliance activities. You shall work with vendors to assess their products, understand their delivery models, and assist in implementing them at Capillary.

A Solution Architect can work across multiple projects with varied stakeholders. You shall set architectural direction, build consensus, and mediate conflicts, providing technical leadership and advisory services to the business. You shall anticipate needs and potential objections and help create an environment that solicits positive contributions from all participants: solution and technical architects, engineering teams, product managers, project managers, product analysts, test and project teams, Information Security, and Operations.

You shall have the excellent interpersonal, communication, and organizational skills required to operate as a leading member of global, distributed teams that deliver quality services and solutions. You shall also cultivate lasting relationships across the business, IT, and vendors/industry analysts to maintain insight into the broader enterprise as well as industry trends. You shall recognize industry technology trends and emerging technologies, understand how they apply to Capillary, and drive their adoption into our organization. You shall evangelize and encourage the importance of technical quality and emerging technologies, sharing experimentation across the organization through mentoring, hackathons, communities, and more. You shall guide others in resolving complex issues in solution architecture and solve complex, escalated aspects of a project.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

2 - 6 Lacs

Hyderabad, Telangana, India

On-site

Teamware Solutions is seeking a versatile Full Stack Developer with expertise in ReactJS, Angular, and Spring Boot. You will be instrumental in designing, developing, and maintaining high-performance, scalable web applications, contributing to both cutting-edge front-end experiences and robust back-end systems.

Key Responsibilities:
- Develop dynamic and responsive user interfaces using modern front-end frameworks, specifically ReactJS and Angular (version 2+).
- Design and implement robust RESTful APIs and microservices using Spring Boot and Core Java.
- Integrate front-end and back-end components, including efficient database interactions with both SQL and NoSQL solutions.
- Optimize application performance, scalability, and maintainability across the full stack.
- Collaborate effectively with cross-functional teams throughout the entire software development lifecycle, ensuring quality and timely delivery.
- Write comprehensive unit and integration tests for both front-end and back-end components.

Qualifications: Proven experience as a Full Stack Developer with significant hands-on experience in ReactJS, Angular, and Spring Boot.

Skills Required:
- Strong proficiency in ReactJS and its ecosystem (e.g., Hooks, Redux/Context API).
- Strong proficiency in Angular (version 2+) and TypeScript.
- Expertise in Core Java and the Spring Boot framework (e.g., Spring MVC, Spring Data, Spring Security).
- Solid understanding of RESTful API design and development and microservices architecture.
- Experience with relational databases (e.g., PostgreSQL, MySQL, Oracle) and/or NoSQL databases (e.g., MongoDB).
- Proficiency with version control systems (e.g., Git).
- Excellent problem-solving, debugging, and analytical skills.

Preferred Skills:
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and deploying full-stack applications.
- Familiarity with containerization technologies (Docker, Kubernetes) and CI/CD pipelines.
- Knowledge of other front-end build tools (e.g., Webpack) and testing frameworks.
- Understanding of Agile/Scrum methodologies.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

9 - 19 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Skills:
- StoneBranch Job Scheduling, Monitoring & Migration: StoneBranch Job Scheduler; ActiveBatch (for migration tasks)
- Scripting & Automation: Unix/shell scripting; general automation experience (scripts/programs)
- Cloud Platforms (Preferred): AWS, Microsoft Azure

Roles and Responsibilities:
- Assist all customers, provide production support for all applications, and provide batch operation support.
- Convert legacy batch schedules from ActiveBatch to the StoneBranch Job Scheduler, including manual job conversions of specific job schedule builds, modifications, and operational simulations.
- Ensure legacy schedules converted by the conversion tool are accurate before test and production scheduling conversion.
- Build requirements and the scheduling flow of the batch cycle.
- Test and implement systemic infrastructure updates and product enhancements; identify tool differences and provide solutions.
- Direct the integration of new services into the organization and provide technical leadership on all projects.
- Manage the CU core system, including all jobs and integrations, to ensure the highest level of uptime.
- Drive end-to-end process redesign, performance improvement, and automation through the identification and elimination of non-value-added activities. Act as a Continuous Automation Improvement Subject Matter Expert (SME).
- Create scripts and programs to address given process challenges and situations and drive the most efficient solutions (see the sketch at the end of this posting).
- Perform all tests on production applications, prepare recovery procedures for all applications, and provide upgrades to the same.
- Coordinate with IT groups and external vendors to ensure effective application services and the reliability of all applications.
- Analyze all business processes and ensure compliance with all controlled processes according to business requirements.
- Actively participate in investigating the platform and perform end-to-end testing if needed.
- Develop and maintain professional relationships with all customers.
- Monitor all alerts and escalate all issues for all procedures and systems.
- Coordinate with various teams, raise support tickets for all issues, analyze root causes, and assist in the efficient resolution of all production processes.
- Good knowledge of job scheduling tools; good understanding of support environments (ticketing, SLAs, rotation, etc.).

Qualification:
- Bachelor's degree in computer science or a related field
- Extensive experience with the job monitoring tools ActiveBatch and StoneBranch
- Experience with cloud services (AWS, Microsoft Azure) a plus
- Experience with Unix/shell scripting or any scripting/automation experience a plus

NOTE: Interested candidates can share their resume to: saikrishna.d@mytechglobal.in
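As a hedged illustration of the general scripting and automation work this posting mentions (not StoneBranch-specific; the job name, command path, and retry policy below are invented for illustration), a batch job wrapper in Python might look like:

```python
import logging
import subprocess
import sys
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")


def run_batch_job(name: str, command: list[str], retries: int = 3, delay_s: int = 60) -> int:
    """Run a batch command, retrying on failure and logging every attempt."""
    for attempt in range(1, retries + 1):
        logging.info("job=%s attempt=%d starting", name, attempt)
        result = subprocess.run(command, capture_output=True, text=True)
        if result.returncode == 0:
            logging.info("job=%s attempt=%d succeeded", name, attempt)
            return 0
        logging.error("job=%s attempt=%d failed rc=%d stderr=%s",
                      name, attempt, result.returncode, result.stderr.strip())
        if attempt < retries:
            time.sleep(delay_s)
    return result.returncode  # non-zero exit lets the scheduler raise an alert


if __name__ == "__main__":
    # Hypothetical nightly extract job; a scheduler such as StoneBranch would
    # invoke this wrapper and react to its exit code.
    sys.exit(run_batch_job("nightly_extract", ["/opt/batch/extract.sh"]))
```

Returning a non-zero exit code instead of swallowing errors is what lets the scheduler's monitoring and alerting do their job.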

Posted 2 weeks ago

Apply

3.0 - 6.0 years

22 - 27 Lacs

Pune

Work from Office

We are growing and seeking a skilled DevOps Engineer to join our DevOps engineering team. You'll be responsible for building and maintaining scalable infrastructure across cloud and bare-metal environments, automating deployment pipelines, and ensuring system reliability.

What You'll Do:
- Monitor and Optimize: Set up and maintain observability tools (logging, alerting, metrics) to detect and resolve performance bottlenecks (see the sketch at the end of this posting).
- Implement Scalability Solutions: Create programmatic scaling and load-balancing strategies to support usage growth.
- Develop Automation Systems: Write production-grade code for CI/CD pipelines, deployment automation, and infrastructure tooling to accelerate shipping.
- Migrate services to Kubernetes; improve the performance and security of the clusters.
- Improve data and ML pipelines; work with EMR clusters.

What You'll Need:
- Deep experience in infrastructure, DevOps, or platform engineering roles
- Deep expertise with cloud platforms (AWS preferred; GCP/Azure also welcome) and Linux environments
- Experience with Terraform
- Proficiency with CI/CD systems and deployment automation (Jenkins, ArgoCD preferred)
- Experience with container orchestration using Kubernetes and Helm for application deployments
- Strong scripting capabilities in Python and Bash for automation and tooling
- Experience implementing secure systems at scale, including IAM and network security controls
- Familiarity with monitoring and observability stacks like Prometheus, Grafana, and Loki
- Experience with configuration management tools (Ansible, Puppet, Chef)
- Strong problem-solving skills with a bias toward resilience and scalability
- Excellent communication and collaboration across engineering teams

Shift Timing: The regular hours for this position cover a combination of business hours in the US and India, typically 2pm-11pm IST. Occasionally, later hours may be required for meetings with teams in other parts of the world. Additionally, for the first 4-6 weeks of onboarding and training, US Eastern hours (IST -9:30) may be required.

Benefits:
- Medical insurance coverage is provided to our employees and their dependants, 100% covered by Comscore.
- Provident Fund is borne by Comscore and is provided over and above the gross salary.
- 26 annual leave days per annum, divided into 8 casual leave days and 18 privilege leave days. Comscore also provides a paid "Recharge Week" over the Christmas and New Year period, so that you can start the new year fresh.
- In addition, you will be entitled to 10 public holidays, 12 sick leave days, 5 paternity leave days, and 1 birthday leave day.
- Flexible work arrangements. "Summer Hours" are offered from March to May: Comscore offers employees the flexibility to work more hours from Monday to Thursday and offset those hours on Friday from 2:00pm onwards.
- Employees are eligible to participate in Comscore's Sodexo meal scheme and enjoy tax benefits.

About Comscore: At Comscore, we're pioneering the future of cross-platform media measurement, arming organizations with the insights they need to make decisions with confidence. Central to this aim are our people, who work together to simplify the complex on behalf of our clients and partners. Though our roles and skills are varied, we're united by our commitment to five underlying values: Integrity, Velocity, Accountability, Teamwork, and Servant Leadership.
If you're motivated by big challenges and interested in helping some of the largest and most important media properties and brands navigate the future of media, we'd love to hear from you. Comscore (NASDAQ: SCOR) is a trusted partner for planning, transacting, and evaluating media across platforms. With a data footprint that combines digital, linear TV, over-the-top, and theatrical viewership intelligence with advanced audience insights, Comscore allows media buyers and sellers to quantify their multiscreen behavior and make business decisions with confidence. A proven leader in measuring digital and set-top box audiences and advertising at scale, Comscore is the industry's emerging, third-party source for reliable and comprehensive cross-platform measurement. To learn more about Comscore, please visit Comscore.com. Comscore is committed to creating an inclusive culture, encouraging diversity.
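As a hedged illustration of the observability work described in this posting, here is a minimal sketch of exposing custom application metrics to Prometheus with the official Python client (the metric names and port are assumptions, not taken from the posting):

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics; a Prometheus server would scrape them from :8000/metrics.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


def handle_request() -> None:
    """Simulate a request and record its latency and outcome."""
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work
    status = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(status=status).inc()


if __name__ == "__main__":
    start_http_server(8000)  # serve /metrics for the Prometheus scraper
    while True:
        handle_request()
```

Grafana dashboards and alerting rules would then sit on top of these series, closing the "logging, alerting, metrics" loop the posting refers to.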

Posted 2 weeks ago

Apply

8.0 - 11.0 years

20 - 27 Lacs

Pune

Hybrid

Sr Specialist Software Engineer

What's the role all about? You will be a key contributor to developing a multi-region, multi-tenant SaaS product. You will collaborate with the core R&D team, using technologies like React, .NET/C#, and AWS to build scalable, high-performance products within a cloud-first, microservices-driven environment.

How will you make an impact?
- Take ownership of the software development lifecycle, including design, development, unit testing, and deployment, working closely with QA teams.
- Ensure that architectural concepts are consistently implemented across the product.
- Act as a product expert within R&D, understanding the product's requirements and its market positioning.
- Work closely with cross-functional teams to ensure successful product delivery.

Key Responsibilities:
- Lead the design and implementation of software features in alignment with product specifications, adhering to High-Level Design (HLD) and Low-Level Design (LLD) standards.
- Lead the development of scalable, multi-tenant SaaS solutions.
- Collaborate with Product Management, R&D, UX, and DevOps teams to deliver seamless, end-to-end solutions.
- Advocate for and implement Continuous Integration and Delivery (CI/CD) practices to improve development efficiency and product quality.
- Mentor junior engineers, share knowledge, and promote best practices within the team.
- Assist in solving complex technical problems and enhance product functionality through innovative solutions.
- Conduct code reviews to ensure adherence to design principles and maintain high-quality standards.
- Plan and execute unit testing to verify functionality and ensure automation coverage.
- Contribute to the ongoing support of software features, ensuring complete quality coverage and responsiveness to any issues during the software lifecycle.

Qualifications & Experience:
- Bachelor's or Master's degree in Computer Science, Electronics Engineering, or a related field from a reputed institute.
- More than 11 years of experience in software development with a strong focus on backend technologies and a track record of delivering complex projects.
- Expertise in React, JavaScript, and TypeScript for front-end development.
- Experience working with public cloud platforms like AWS (mandatory).
- Hands-on experience with Continuous Integration and Delivery (CI/CD) practices using tools like Docker, Kubernetes, and other modern pipelines.
- Experience in .NET is good to have but not mandatory.
- Experience in developing high-performance, highly available, and scalable systems.
- Working knowledge of RESTful APIs.
- Solid understanding of scalable and microservices architectures, performance optimization, and secure coding practices.
- Exceptional problem-solving skills and the ability to work on multiple concurrent projects.

What's in it for you? Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Posted 2 weeks ago

Apply

4.0 - 5.0 years

3 - 12 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary: The Snowflake Data Engineer is responsible for building and managing data pipelines and data warehousing solutions using the Snowflake platform. This role involves working with large datasets, ensuring data quality, and enabling scalable data integration and analytics across the organization.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Snowflake
- Build and optimize data models, schemas, and data warehouses for performance and efficiency
- Perform data extraction, transformation, and loading (ETL/ELT) from various sources (see the sketch at the end of this posting)
- Integrate Snowflake with external tools, data sources, and cloud platforms
- Collaborate with data analysts, architects, and business teams to define data requirements
- Ensure data quality, integrity, and security across all data processes
- Monitor data pipelines and troubleshoot performance or data issues
- Automate workflows and optimize queries for cost and speed
- Maintain documentation for data structures, processes, and governance policies

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- 3+ years of experience in data engineering, with at least 1+ year on Snowflake
- Proficiency in SQL and experience with Snowflake architecture and features
- Hands-on experience with ETL/ELT tools like Informatica, Matillion, dbt, or similar
- Experience working with cloud platforms like AWS, Azure, or GCP
- Strong understanding of data modeling, data warehousing, and performance tuning
- Good communication, problem-solving, and documentation skills

Preferred Qualifications:
- Snowflake SnowPro Certification
- Experience with scripting languages like Python for data processing
- Familiarity with DevOps tools for CI/CD pipelines in data engineering
- Knowledge of data security, governance, and compliance standards
- Experience working in Agile or Scrum environments
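As a hedged sketch of the ELT pattern this role centers on (the account, stage, and table names are invented for illustration, not taken from the posting), loading staged files into Snowflake with the official Python connector might look like:

```python
import snowflake.connector

# Hypothetical connection details; real values would come from a secrets manager.
conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Load staged CSV files into a raw table (ELT: transform later inside the warehouse).
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @ORDERS_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    # A simple in-warehouse transformation step (dbt would typically own this layer).
    cur.execute("""
        CREATE OR REPLACE TABLE ANALYTICS.CURATED.DAILY_ORDERS AS
        SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
        FROM RAW.ORDERS
        GROUP BY order_date
    """)
finally:
    conn.close()
```

Keeping the transform in SQL inside Snowflake, rather than in application code, is what distinguishes ELT from classic ETL and lets the warehouse's compute do the heavy lifting.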

Posted 2 weeks ago

Apply

2.0 - 4.0 years

2 - 4 Lacs

Bengaluru, Karnataka, India

On-site

Teamware Solutions is seeking a highly skilled and experienced Senior Automation Engineer with deep expertise in Ansible and Python. This pivotal role involves leading the design, development, implementation, and troubleshooting of advanced automation solutions across our infrastructure and software delivery pipelines. The successful candidate will ensure smooth operations, enhance efficiency, and contribute significantly to our strategic business objectives by driving automation initiatives.

Roles and Responsibilities:
- Automation Strategy & Design: Lead the analysis of existing processes and infrastructure to identify automation opportunities. Design and architect robust, scalable, and reusable automation frameworks and solutions, primarily using Ansible and Python.
- Development & Implementation: Develop, test, and maintain complex automation scripts, Ansible playbooks, roles, and custom modules to automate infrastructure provisioning, configuration management, application deployments, and operational tasks (see the module sketch at the end of this posting).
- CI/CD Integration: Integrate automation solutions seamlessly into Continuous Integration/Continuous Delivery (CI/CD) pipelines, enabling faster, more reliable, and consistent software releases.
- Code Review & Mentorship: Conduct thorough code reviews for automation scripts and playbooks. Provide technical guidance and mentorship to junior automation engineers, fostering a culture of best practices and continuous learning.
- Troubleshooting & Optimization: Perform advanced troubleshooting of automation failures and identify root causes. Continuously optimize existing automation scripts and processes for performance, reliability, and maintainability.
- Infrastructure as Code (IaC): Drive the adoption of Infrastructure as Code principles, utilizing Ansible and other relevant tools to manage infrastructure configurations.
- Cross-functional Collaboration: Collaborate closely with DevOps teams, SREs, software development teams, and operations personnel to understand their automation needs and deliver impactful solutions.
- Documentation: Create and maintain comprehensive technical documentation for all automation processes, tools, and solutions.
- Technology Exploration: Stay abreast of emerging automation technologies and industry trends, evaluating their potential application to enhance our automation landscape.

Preferred Candidate Profile:
- Automation Expertise: Proven ability to lead and deliver complex automation projects from inception to production.
- Ansible Proficiency: Expert-level knowledge and hands-on experience with Ansible, including advanced playbook writing, custom module development, and Ansible Tower/AWX.
- Python Mastery: Strong programming skills in Python for developing automation scripts, tools, and integrations.
- Operating Systems: Solid understanding and experience with Linux/Unix system administration. Familiarity with Windows automation is a plus.
- CI/CD & DevOps: In-depth understanding of CI/CD principles and experience integrating automation with CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps).
- Version Control: Proficient with version control systems, especially Git, and collaborative development workflows.
- Cloud Platforms (Plus): Experience automating tasks and managing infrastructure on cloud platforms (e.g., AWS, Azure, GCP).
- Analytical & Problem-Solving: Exceptional analytical and problem-solving skills, with a keen ability to diagnose complex issues in automated environments.
- Communication & Leadership: Excellent verbal and written communication skills, with demonstrated ability to lead discussions, influence decisions, and mentor team members.
- Certifications (Plus): Relevant industry certifications (e.g., Red Hat Certified Specialist in Ansible Automation) are highly desirable.
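The custom-module development mentioned above follows a standard Ansible pattern; here is a hedged, minimal sketch of a Python module (the module's purpose and its single `path` option are invented for illustration):

```python
#!/usr/bin/python
# Hypothetical Ansible module: reports whether a filesystem path exists.
import os

from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            path=dict(type="str", required=True),
        ),
        supports_check_mode=True,  # safe to run with --check
    )
    path = module.params["path"]
    exists = os.path.exists(path)
    # A read-only module never changes state, so changed is always False.
    module.exit_json(changed=False, exists=exists, path=path)


if __name__ == "__main__":
    main()
```

Dropped into a role's `library/` directory, a module like this can then be called from a playbook task just like any built-in module.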

Posted 2 weeks ago

Apply

3.0 - 5.0 years

3 - 5 Lacs

Delhi, India

On-site

Teamware Solutions is seeking a talented and innovative AI / Machine Learning Engineer to join our growing team. This pivotal role involves working with cutting-edge artificial intelligence and machine learning technologies, ensuring smooth operations, and directly contributing to our business objectives by leveraging AI to solve complex problems and drive intelligent solutions within the Artificial Intelligence (AI) domain.

Roles and Responsibilities:
- Analysis & Data Preparation: Conduct in-depth analysis of business problems to identify AI/ML opportunities. Perform data collection, cleansing, feature engineering, and preprocessing of large datasets to prepare them for model training.
- Model Development & Training: Design, develop, train, and evaluate various machine learning models (e.g., supervised, unsupervised, deep learning, NLP, computer vision) using relevant algorithms and frameworks (see the sketch at the end of this posting).
- Implementation & Deployment: Implement AI/ML algorithms and integrate trained models into production systems and applications. Work on deployment pipelines to ensure models are scalable and performant in real-world environments.
- Troubleshooting & Optimization: Monitor the performance of deployed AI/ML models, troubleshoot issues, and continuously optimize models for accuracy, efficiency, and resource utilization.
- Research & Innovation: Stay abreast of the latest advancements, research papers, and industry trends in Artificial Intelligence and Machine Learning. Experiment with new technologies and methodologies to bring innovative solutions to the business.
- Collaboration: Work closely with data scientists, software engineers, product managers, and other stakeholders to understand requirements, define project scope, and deliver impactful AI-driven solutions.
- Documentation: Create clear and comprehensive documentation for AI models, development processes, and deployment strategies.

Preferred Candidate Profile:
- Technical Proficiency: Strong programming skills in languages commonly used in AI/ML (e.g., Python, R, Java, Scala). Proficiency with key machine learning libraries and frameworks (e.g., TensorFlow, PyTorch, Scikit-learn, Keras). Solid understanding of machine learning algorithms, statistical modeling, and data structures. Experience with data manipulation and analysis tools (e.g., Pandas, NumPy). Familiarity with database concepts and querying (SQL/NoSQL).
- Cloud Platforms (Plus): Experience with cloud-based AI/ML services and platforms (e.g., AWS SageMaker, Azure ML, Google Cloud AI Platform) is a significant advantage.
- MLOps (Plus): Understanding of MLOps principles, including model versioning, deployment, monitoring, and retraining strategies.
- Problem-Solving: Exceptional analytical and problem-solving skills with the ability to break down complex problems and devise effective AI/ML solutions.
- Communication: Strong verbal and written communication skills to articulate complex technical concepts to both technical and non-technical audiences.
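As a hedged, self-contained sketch of the supervised-learning workflow described above (synthetic data stands in for a real, curated business dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a cleansed, feature-engineered business dataset.
X, y = make_classification(n_samples=2_000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a baseline model; in practice this is where algorithm selection,
# feature engineering, and hyperparameter tuning iterate.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on held-out data before any production deployment.
print(classification_report(y_test, model.predict(X_test)))
```

The held-out evaluation step is the gate between "Model Development & Training" and "Implementation & Deployment" in the responsibilities above.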

Posted 2 weeks ago

Apply

2.0 - 5.0 years

2 - 5 Lacs

Delhi, India

On-site

Teamware Solutions is actively seeking a skilled and proactive Snowflake Data Engineer with 2-5 years of experience to contribute to our robust data initiatives. This role is pivotal for working with cutting-edge cloud data warehousing technologies, ensuring seamless data operations, and transforming raw data into actionable insights to meet diverse business objectives. The successful candidate will be instrumental in designing, developing, implementing, and troubleshooting solutions within the Snowflake environment.

Roles and Responsibilities:
- Data Warehouse Development: Design, develop, and maintain scalable data warehousing solutions on the Snowflake platform, focusing on optimal data architecture and performance.
- ETL/ELT Pipeline Implementation: Build and optimize efficient ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) data pipelines to ingest, process, and load large volumes of structured and semi-structured data into Snowflake from various sources.
- SQL Development & Optimization: Write complex SQL queries for data extraction, transformation, analysis, and reporting within Snowflake. Optimize existing queries and data structures for improved performance and cost efficiency.
- Snowflake Features Utilization: Work extensively with Snowflake's core features such as virtual warehouses, stages, Snowpipe, Streams, Tasks, Time Travel, and Zero-Copy Cloning to maximize platform capabilities (see the Streams/Tasks sketch at the end of this posting).
- Data Modeling: Contribute to data modeling and schema design (e.g., star schema, snowflake schema, normalized/denormalized structures) within Snowflake to support analytical and reporting needs.
- Troubleshooting & Performance Tuning: Identify, troubleshoot, and resolve data-related issues, pipeline failures, and performance bottlenecks within the Snowflake environment.
- Collaboration: Collaborate closely with data scientists, data analysts, and other engineering teams to understand data requirements and deliver robust data solutions.
- Data Governance & Security: Assist in implementing data governance best practices, including access controls, roles, and security measures within Snowflake to ensure data integrity and compliance.
- Documentation: Create and maintain technical documentation for data pipelines, data models, and Snowflake solutions.

Preferred Candidate Profile:
- Experience: 2 to 5 years of hands-on professional experience as a Snowflake Developer or Data Engineer with a strong focus on Snowflake.
- Snowflake Expertise: In-depth understanding of Snowflake's cloud data platform architecture and key features.
- SQL Proficiency: Strong proficiency in SQL with the ability to write complex queries, stored procedures, and optimize query performance.
- Data Warehousing Concepts: Solid understanding of data warehousing principles, ETL/ELT processes, and data modeling concepts.
- Programming Skills (Plus): Experience with programming languages such as Python or Scala for data processing and automation is a plus.
- Cloud Exposure (Plus): Familiarity with cloud platforms (AWS, Azure, GCP) and their data services, particularly in relation to data ingestion into Snowflake.
- Tools: Experience with data integration tools (e.g., Fivetran, Matillion) or orchestration tools (e.g., Apache Airflow) is an advantage.
- Analytical Skills: Strong analytical and problem-solving abilities with keen attention to detail.
- Communication: Excellent verbal and written communication skills, with the ability to articulate technical concepts clearly.
- Education: Bachelor's degree in Computer Science, Data Engineering, or a related technical field.
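As a hedged illustration of the Streams and Tasks features named above (the table, stream, task, and column names are invented), incremental change capture can be wired up with a few statements, shown here through the Python connector:

```python
import snowflake.connector

# Hypothetical connection; credentials would come from a vault in practice.
conn = snowflake.connector.connect(
    account="myorg-myaccount", user="de_user", password="...",
    warehouse="DE_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

# A stream records inserts/updates/deletes on the source table.
cur.execute("CREATE OR REPLACE STREAM ORDERS_STREAM ON TABLE RAW.ORDERS")

# A task periodically drains the stream into a curated table.
cur.execute("""
    CREATE OR REPLACE TASK MERGE_ORDERS
      WAREHOUSE = DE_WH
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO CURATED.ORDERS_LATEST (order_id, order_date, amount)
      SELECT order_id, order_date, amount
      FROM ORDERS_STREAM
      WHERE METADATA$ACTION = 'INSERT'
""")

# Tasks are created suspended; resume to start the schedule.
cur.execute("ALTER TASK MERGE_ORDERS RESUME")
conn.close()
```

Reading from the stream inside the task consumes the change records, so each run processes only new changes, which is the core of Snowflake's incremental-pipeline pattern.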

Posted 2 weeks ago

Apply

12.0 - 20.0 years

0 Lacs

Mysore, Karnataka

On-site

The Group Product Manager will lead the strategic development and enhancement of our proprietary business intelligence platform, iSOCRATES MADTechAI, as well as other innovative products. This role demands a deep understanding of technology, strong analytical skills, and a collaborative mindset to evaluate product potential, oversee the product lifecycle, and ensure alignment with both client-partner and internal needs.

Responsibilities:
- Lead the strategic vision and execution of iSOCRATES MADTechAI, focusing on feature enhancements and user experience improvements.
- Conduct market research to identify customer needs within the AdTech, MarTech, and DataTech landscapes, translating them into actionable product requirements.
- Prioritize product features based on business impact, customer feedback, and technical feasibility.
- Oversee the entire product development lifecycle, including conception, design, development, testing, and launch phases.
- Utilize Agile methodologies (Scrum, Kanban) to facilitate iterative development and continuous improvement.
- Manage roadmaps, timelines, and deliverables using tools like Jira, ensuring projects stay on track and risks are mitigated.

Technical expertise:
- SaaS Development: Deep understanding of SaaS architecture, deployment, and lifecycle management.
- Cloud Platforms: Proficiency with cloud platforms (AWS required; Google Cloud and Azure preferred).
- AI and Machine Learning: Extensive experience with AI/ML concepts, tools, and frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and their application in product development.
- Data Engineering: Strong knowledge of data engineering principles, including ETL processes, data pipelines, and data modeling to ensure data integrity and availability for analytics.
- Data Analytics: Strong knowledge of data analytics, data warehousing, and business intelligence tools (e.g., SQL, Tableau, Power BI, Sisense).
- Natural Language Processing (NLP): Familiarity with NLP techniques and their application in product features to enhance user engagement and insights.
- Microservices Architecture: Experience designing and implementing microservices architectures to enhance product scalability and maintainability.
- ReactJS Technologies: Proficiency in ReactJS and related frameworks to ensure seamless front-end development and integration with back-end services.

Collaboration and go-to-market:
- Collaborate with engineering teams to define system architecture and design concepts that align with best practices in UX/UI.
- Ensure the integration of various technologies, including APIs, AngularJS, Node.js, ReactJS, and MVC architecture, into product offerings.
- Apply strong hands-on experience in Product-Led Growth (PLG) strategies and partner/channel go-to-market approaches.
- Partner closely with the U.S.- and India-based Partner Success teams to support pre-sales activities and customer engagement, acting as a subject matter expert in AdTech, MarTech, and DataTech.
- Facilitate communication between product, engineering, marketing, and sales teams to ensure cohesive product strategy and execution.
- Engage with external customers to gather feedback and drive product iterations.

Analytics and process:
- Design and implement client data analysis methodologies, focusing on data-driven decision-making processes relevant to AdTech, MarTech, and DataTech.
- Develop analytics frameworks that leverage data science principles and advanced statistical methods to derive actionable insights for clients.
- Monitor product performance metrics and develop KPIs to assess impact and identify areas for improvement, leveraging A/B testing and experimentation techniques.
- Establish and refine processes for product management, ensuring repeatability and scalability.
- Lead initiatives to enhance existing workflows, focusing on efficiency and effectiveness in product delivery.
- Create and present progress reports, updates, and presentations to senior management and stakeholders.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related quantitative field. An MBA or specialized training in product management or data science is preferred.
- 12 to 20 years of experience in technology product engineering and development, with a minimum of 10 years in product management.
- Proven track record in managing complex products, especially in business intelligence or marketing technology domains.
- Strong proficiency in BI platforms (e.g., Sisense, Tableau, Power BI, Looker, DOMO) and data visualization tools.
- Deep understanding of cloud platforms (AWS, Snowflake) and experience with database query languages (SQL, NoSQL).
- Expertise in API development and management, along with knowledge of front-end technologies (AngularJS, ReactJS, Bootstrap).
- In-depth knowledge of AI and NLP technologies, with experience applying them to enhance product functionality.
- Strong background in data engineering, including ETL processes, data warehousing, and data pipeline management.
- Strong understanding of digital advertising, including AdTech, MarTech, and DataTech technologies.
- Experience in B2C and B2B SaaS product development, particularly in customer journey mapping and email marketing.
- Strong analytical and problem-solving abilities, with a focus on data-driven outcomes.
- Excellent communication and presentation skills, capable of articulating complex ideas to diverse audiences.
- Collaborative and open-minded, fostering a culture of innovation and accountability.
- High energy and enthusiasm for driving product success in a fast-paced environment.
- Extensive experience with Atlassian products, including Jira and Confluence, and with product management and monitoring software.
- Must be ready to relocate to Mysuru or Bengaluru.

Posted 3 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

Thane, Maharashtra

On-site

About SimplyFI: SimplyFI is a fast-growing fintech company that specializes in building blockchain and AI-powered digital solutions to simplify complex trade finance and supply chain operations. The company's mission is to digitize, automate, and secure financial workflows for global enterprises.

Job Summary: We are seeking a skilled Python Developer with a passion for technology and innovation to join our engineering team. The ideal candidate should have at least 12 years of hands-on experience in Python development and a solid understanding of back-end development, message queues (specifically RabbitMQ), and SQL databases. Your primary responsibility will be to contribute to the development of scalable and high-performance applications that address real-world challenges in the fields of fintech and trade automation.

Key Responsibilities:
- Develop and maintain backend services and APIs using Python.
- Utilize RabbitMQ for implementing asynchronous message queues (see the sketch at the end of this posting).
- Write efficient SQL queries and optimize database interactions.
- Collaborate with frontend developers, DevOps, and product teams for seamless integration.
- Create clean, modular, and testable code.
- Participate in code reviews and contribute to process improvements.
- Troubleshoot, debug, and enhance existing systems.

Required Skills:
- Proficiency in Python, with experience in Flask or FastAPI.
- Hands-on experience with RabbitMQ or similar message queuing systems.
- Strong understanding of SQL and experience with relational databases like PostgreSQL and MySQL.
- Familiarity with RESTful API design and integration.
- Knowledge of Git version control.
- Understanding of software design patterns and best practices.

Preferred Skills (Good to Have):
- Experience with NoSQL databases such as MongoDB.
- Exposure to containerization tools like Docker.
- Basic knowledge of microservices architecture.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
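A minimal, hedged sketch of the RabbitMQ work-queue pattern this role mentions, using the pika client (the queue name and message body are illustrative; producer and consumer would normally run as separate processes):

```python
import pika

# Hypothetical local broker; production would use credentials and TLS.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="trade_docs", durable=True)  # survive broker restarts

# Producer side: publish a persistent message.
channel.basic_publish(
    exchange="",
    routing_key="trade_docs",
    body=b'{"doc_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist to disk
)


# Consumer side: acknowledge only after the work succeeds,
# so a crashed worker's message is redelivered.
def on_message(ch, method, properties, body):
    print("processing", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_qos(prefetch_count=1)  # fair dispatch across workers
channel.basic_consume(queue="trade_docs", on_message_callback=on_message)
channel.start_consuming()
```

Durable queues, persistent messages, and manual acknowledgements are the three knobs that make the queue reliable rather than best-effort.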

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Telangana

On-site

As the Vice President of Engineering at Teradata in India, you will be responsible for leading the software development organization for the AI Platform Group. This includes overseeing the execution of the product roadmap for key technologies such as Vector Store, the Agent platform, Apps, user experience, and AI/ML-driven use cases. Your success in this role will be measured by your ability to build a world-class engineering culture, attract and retain technical talent, accelerate product delivery, and drive innovation that brings tangible value to customers.

In this role, you will lead a team of over 150 engineers with a focus on helping customers achieve outcomes with Data and AI. Collaboration with key functions such as Product Management, Product Operations, Security, Customer Success, and Executive Leadership will be essential to your success. You will also lead a regional team of up to 500 individuals, including software development, cloud engineering, DevOps, engineering operations, and architecture teams, and collaborate with various stakeholders at regional and global levels.

To be considered a qualified candidate for this position, you should have at least 10 years of senior leadership experience in product development or engineering within enterprise software product companies, and a minimum of 3 years in a VP Product or equivalent role managing large-scale technical teams in a growth market. You must have a proven track record of leading agentic AI development and scaling AI in a hybrid cloud environment, as well as experience with Agile and DevSecOps methodologies. Your background should include expertise in cloud platforms, data harmonization, data analytics for AI, Kubernetes, containerization, and microservices-based architectures. Experience in delivering SaaS-based data and analytics platforms, modern data stack technologies, AI/ML infrastructure, enterprise security, and performance engineering is also crucial. A passion for open-source collaboration, building high-performing engineering cultures, and inclusive leadership is highly valued.

Ideally, you should hold a Master's degree in Engineering or Computer Science, or an MBA. At Teradata, we prioritize a people-first culture, offer a flexible work model, focus on well-being, and are committed to Diversity, Equity, and Inclusion. Join us in our mission to empower our customers and drive innovation in the world of AI and data analytics.

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You have around 10 years of experience in the AI/ML, Generative AI, and automation domains. As a highly skilled and visionary Solution Architect, you will lead the design and implementation of cutting-edge solutions in Artificial Intelligence (AI), Generative AI, and Automation. The role requires a combination of technical expertise, strategic thinking, and leadership to drive innovation and deliver scalable, high-impact solutions across the organization.

Your key responsibilities will include designing end-to-end AI and automation solutions that are aligned with business goals and technical requirements. You will participate in business requirements discussions to arrive at solutions and technical architecture that consider all aspects of the business problem. You will define architecture blueprints, integration patterns, and data pipelines for AI-driven systems, and evaluate and select appropriate AI models, automation tools (e.g., RPA, BPM), and cloud platforms to ensure the scalability, security, and performance of deployed solutions. Staying current with emerging technologies and industry trends in AI, ML, NLP, computer vision, and automation will also be crucial, as will providing technical leadership and mentorship to engineering teams and participating in pre-sales and client engagements to define solution strategies and roadmaps.

Required qualifications include a Bachelor's or Master's degree in Computer Science, Engineering, or a related field; proven experience architecting and deploying Generative AI solutions in production environments on Azure, GCP, AWS, etc.; strong knowledge of AI/ML frameworks such as TensorFlow, PyTorch, and Scikit-learn; experience with automation platforms like UiPath, Automation Anywhere, and Power Automate; proficiency in cloud platforms (AWS, Azure, or GCP) and containerization (Docker, Kubernetes); a solid understanding of data engineering, APIs, and microservices architecture; and excellent communication and stakeholder management skills.

Preferred qualifications include certifications in cloud architecture or AI/ML (e.g., AWS Certified Machine Learning, Azure AI Engineer), experience with MLOps, model monitoring, and CI/CD pipelines, and familiarity with ethical AI practices and data privacy regulations.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

The Implementation Technical Architect role focuses on designing, developing, and deploying cutting-edge Generative AI (GenAI) solutions using the latest Large Language Models (LLMs) and frameworks. Your responsibilities include creating scalable and modular architecture for GenAI applications, leading Python development for GenAI applications, building tools for automated data curation, integrating solutions with cloud platforms like Azure, GCP, and AWS, applying advanced fine-tuning techniques to optimize LLM performance, establishing LLMOps pipelines, ensuring ethical AI practices, implementing Reinforcement Learning with Human Feedback and Retrieval-Augmented Generation techniques, collaborating with front-end developers, and more.

Key Responsibilities:
- Design and Architecture: Create scalable and modular architecture for GenAI applications using frameworks like Autogen, Crew.ai, LangGraph, LlamaIndex, and LangChain.
- Python Development: Lead the development of Python-based GenAI applications, ensuring high-quality, maintainable, and efficient code.
- Data Curation Automation: Build tools and pipelines for automated data curation, preprocessing, and augmentation to support LLM training and fine-tuning.
- Cloud Integration: Design and implement solutions leveraging the Azure, GCP, and AWS LLM ecosystems, ensuring seamless integration with existing cloud infrastructure.
- Fine-Tuning Expertise: Apply advanced fine-tuning techniques such as PEFT, QLoRA, and LoRA to optimize LLM performance for specific use cases (see the sketch at the end of this posting).
- LLMOps Implementation: Establish and manage LLMOps pipelines for continuous integration, deployment, and monitoring of LLM-based applications.
- Responsible AI: Ensure ethical AI practices by implementing Responsible AI principles, including fairness, transparency, and accountability.
- RLHF and RAG: Implement Reinforcement Learning with Human Feedback (RLHF) and Retrieval-Augmented Generation (RAG) techniques to enhance model performance.
- Modular RAG Design: Develop and optimize Modular RAG architectures for complex GenAI applications.
- Open Source Collaboration: Leverage Hugging Face and other open-source platforms for model development, fine-tuning, and deployment.
- Front-End Integration: Collaborate with front-end developers to integrate GenAI capabilities into user-friendly interfaces.

Required Skills:
- Python Programming: Deep expertise in Python for building GenAI applications and automation tools.
- LLM Frameworks: Proficiency in frameworks like Autogen, Crew.ai, LangGraph, LlamaIndex, and LangChain.
- Large-Scale Data Handling & Architecture: Design and implement architectures for handling large-scale structured and unstructured data.
- Multi-Modal LLM Applications: Familiarity with text chat completion, vision, and speech models.
- SLM Fine-Tuning: Fine-tune SLMs (Small Language Models) for domain-specific data and use cases.
- Safety Tooling: Prompt injection fallback and RCE tools such as PyRIT and the HAX Toolkit; anti-hallucination and anti-gibberish tools and metrics such as BLEU.
- Cloud Platforms: Extensive experience with the Azure, GCP, and AWS LLM ecosystems and APIs.
- Fine-Tuning Techniques: Mastery of PEFT, QLoRA, LoRA, and other fine-tuning methods.
- LLMOps: Strong knowledge of LLMOps practices for model deployment, monitoring, and management.
- Responsible AI: Expertise in implementing ethical AI practices and ensuring compliance with regulations.
- RLHF and RAG: Advanced skills in Reinforcement Learning with Human Feedback and Retrieval-Augmented Generation.
- Modular RAG: Deep understanding of Modular RAG architectures and their implementation.
- Hugging Face: Proficiency in using Hugging Face and similar open-source platforms for model development.
- Front-End Integration: Knowledge of front-end technologies to enable seamless integration of GenAI capabilities.
- SDLC and DevSecOps: Strong understanding of the secure software development lifecycle and DevSecOps practices for LLMs.
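As a hedged sketch of the PEFT/LoRA fine-tuning named in the requirements (the base model and target modules are assumptions; real choices depend on the architecture being tuned):

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical small base model; any causal LM from the Hugging Face Hub works similarly.
base_id = "facebook/opt-350m"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA trains small low-rank matrices injected into attention projections,
# leaving the frozen base weights untouched.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT-style models
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Training then proceeds with a standard Trainer loop on the domain dataset; QLoRA adds 4-bit quantization of the frozen base to cut memory further.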

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will lead the development of high-performance backend services using Java and Spring Boot, designing and building reliable and scalable REST APIs and microservices. You will take ownership of features and system components throughout the software lifecycle, design and implement CI/CD workflows using tools like Jenkins or GitHub Actions, and contribute to architectural decisions, code reviews, and system optimizations.

Expertise in Java and advanced experience with the Spring Boot framework are essential, along with proven experience building and scaling REST APIs and microservices. Hands-on experience with CI/CD automation and DevOps tools is required, as is working knowledge of distributed systems, cloud platforms, and Kafka. A strong understanding of system design, performance optimization, and best coding practices is crucial for this role.

Nice-to-have skills include proficiency in Docker and Kubernetes for containerized deployments, exposure to NoSQL databases such as MongoDB and Cassandra, and experience with configuration server management and dynamic config updates. Familiarity with monitoring and logging tools like Prometheus and the ELK Stack, along with awareness of cloud security standards, observability, and incident management, will be beneficial.

This is a full-time position with benefits including Provident Fund, on a day-shift schedule. The job requires at least 5 years of experience across Java development, Docker and Kubernetes, NoSQL databases (MongoDB, Cassandra), Kafka, the Spring Boot framework, Jenkins, GitHub, REST APIs, system design, cloud architectures and microservices, and monitoring and logging tools, plus awareness of cloud security. Work location is in person.

Posted 3 weeks ago

Apply

10.0 - 15.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You have 15+ years of relevant experience as a Java Architect, and we are looking for a highly skilled and experienced candidate to join our team. As a Java Architect, you will lead the architecture and design efforts for enterprise-level applications using Java, Spring Boot, and microservices architecture. You will collaborate with cross-functional teams, develop technical documentation, and mentor development teams on best practices and design patterns. Additionally, you will evaluate new technologies, troubleshoot technical issues, and ensure alignment of architectural solutions with business objectives.

Key Responsibilities:
- Design and develop high-quality software solutions using Java, Spring Boot, and microservices architecture.
- Lead the architecture and design efforts for enterprise-level applications, ensuring scalability, performance, and security.
- Collaborate with cross-functional teams to define technical requirements and create comprehensive architectural solutions.
- Develop and maintain technical documentation, including architectural diagrams, design patterns, and coding standards.
- Mentor and guide development teams on best practices, coding standards, and design patterns.
- Evaluate and recommend new technologies, tools, and frameworks to enhance the development process and improve system performance.
- Ensure alignment of architectural solutions with business objectives and technical requirements.
- Troubleshoot and resolve complex technical issues, providing technical leadership and guidance.

Mandatory Skills:
- Strong expertise in Java programming and related technologies.
- Proficiency in Spring Boot for building robust and scalable applications.
- In-depth knowledge of microservices architecture and best practices.
- Experience with Maven for project management and build automation.
- Strong understanding of coding and design patterns.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Ability to lead and mentor development teams.

Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with containerization technologies like Docker and Kubernetes.
- Knowledge of DevOps practices and tools.
- Experience with database design and management.
- Certification in Java or related technologies.

Education and Experience:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Minimum of 10+ years of experience in software development, architecture, and design.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for designing and implementing scalable Snowflake data warehouse architectures, including schema modeling and data partitioning. You will lead or support data migration projects from on-premise or legacy cloud platforms to Snowflake, and develop ETL/ELT pipelines and data integrations using tools such as DBT, Fivetran, Informatica, and Airflow. You will define and implement best practices for data modeling, query optimization, and storage efficiency in Snowflake, and collaborate with cross-functional teams, including data engineers, analysts, BI developers, and stakeholders, to align architectural solutions.

You will ensure data governance, compliance, and security by implementing RBAC, masking policies, and access control within Snowflake (see the sketch after the qualifications lists below). You will work with DevOps teams to enable CI/CD pipelines, monitoring, and infrastructure as code for Snowflake environments, optimize resource utilization, monitor workloads, and manage the cost-effectiveness of the platform. Staying updated with Snowflake features, cloud vendor offerings, and best practices is crucial.

Qualifications & Skills:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- X years of experience in data engineering, data warehousing, or analytics architecture.
- 3+ years of hands-on experience in Snowflake architecture, development, and administration.
- Strong knowledge of cloud platforms (AWS, Azure, or GCP).
- Solid understanding of SQL, data modeling, and data transformation principles.
- Experience with ETL/ELT tools, orchestration frameworks, and data integration.
- Familiarity with data privacy regulations (GDPR, HIPAA, etc.) and compliance.

Preferred Qualifications:
- Snowflake certification (SnowPro Core / Advanced).
- Experience in building data lakes, data mesh architectures, or streaming data platforms.
- Familiarity with tools like Power BI, Tableau, or Looker for downstream analytics.
- Experience with Agile delivery models and CI/CD workflows.
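As a hedged illustration of the RBAC and masking-policy work mentioned above (the role, policy, and table names are invented), Snowflake's native dynamic data masking can be set up with a few statements, shown here through the Python connector:

```python
import snowflake.connector

# Hypothetical admin connection; in practice use key-pair auth and a vault.
conn = snowflake.connector.connect(
    account="myorg-myaccount", user="sec_admin", password="...",
    warehouse="ADMIN_WH", database="ANALYTICS", schema="CURATED",
)
cur = conn.cursor()

# Mask email addresses for everyone except a privileged role.
cur.execute("""
    CREATE OR REPLACE MASKING POLICY EMAIL_MASK AS (val STRING)
    RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val
           ELSE '***MASKED***' END
""")
cur.execute("""
    ALTER TABLE CUSTOMERS MODIFY COLUMN EMAIL
    SET MASKING POLICY EMAIL_MASK
""")

# Minimal RBAC: a read-only role scoped to the curated schema.
cur.execute("CREATE ROLE IF NOT EXISTS ANALYST_RO")
cur.execute("GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYST_RO")
cur.execute("GRANT USAGE ON SCHEMA ANALYTICS.CURATED TO ROLE ANALYST_RO")
cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.CURATED TO ROLE ANALYST_RO")
conn.close()
```

Because the policy is attached to the column rather than baked into views, every query path sees the same governed behavior, which is the point of centralizing masking in the platform.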

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

The opportunity: Hitachi Energy is seeking a highly motivated and skilled Business Analyst to support and drive the successful delivery of AI initiatives across the organization. This role will focus on identifying business opportunities, gathering and analyzing requirements, and collaborating with cross-functional teams to implement AI solutions using a variety of technologies and platforms. The ideal candidate will have a strong understanding of business requirements gathering concepts, excellent analytical skills, and experience working in complex industrial or energy environments.

How you'll make an impact:
- Collaborate with business units to identify and prioritize AI use cases aligned with strategic goals.
- Conduct stakeholder interviews, workshops, and process analysis to gather detailed business requirements.
- Translate business needs into functional and technical specifications for AI solutions.
- Perform cost-benefit and impact analysis for proposed AI initiatives.
- Define and track KPIs to measure the success of AI implementations.
- Work closely with data scientists, AI engineers, and IT teams to ensure business requirements are accurately implemented.
- Support the development and deployment of AI solutions using platforms such as Microsoft Azure AI.
- Assist in data preparation, validation, and governance activities.
- Ensure ethical AI practices and compliance with data privacy regulations.
- Prepare and present business cases, project updates, and post-implementation reviews, liaising with the different vendor teams.
- Facilitate change management and user adoption of AI solutions.
- Maintain comprehensive documentation, including business requirements, process flows, user stories, and change requests.
- Identify opportunities for process automation and optimization using AI technologies.
- Ensure compliance with applicable external and internal regulations, procedures, and guidelines.
- Live Hitachi Energy's core values of safety and integrity, which means taking responsibility for your actions while caring for your colleagues and the business.

Your background:
- Bachelor's or Master's degree in Business, Engineering, Computer Science, or a related field.
- Minimum 8 years of overall experience.
- 4+ years of experience as a Business Analyst, preferably in the energy or industrial sector.
- 2+ years of experience working on AI/ML projects.
- Strong understanding of AI/ML concepts, the data lifecycle, and cloud platforms.
- Familiarity with tools such as Power BI, Azure DevOps, JIRA, and Confluence.
- Experience with AI applications and solutions (e.g., GenAI-based chatbots, RAG architecture, etc.).
- Certifications in Business Analysis (CBAP, PMI-PBA) or AI platforms (Azure AI Engineer, AWS Machine Learning).
- Proficiency in both spoken and written English is required.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

maharashtra

On-site

At PwC, our data and analytics team focuses on using data to drive insights and support informed business decisions. We leverage advanced analytics techniques to help clients optimize their operations and achieve strategic goals. As a data analysis professional at PwC, you will use advanced analytical methods to extract insights from large datasets and enable data-driven decision-making; your expertise in data manipulation, visualization, and statistical modeling will be pivotal in helping clients solve complex business challenges.

PwC US - Acceleration Center is currently seeking a highly skilled MLOps/LLMOps Engineer to play a critical role in deploying, scaling, and maintaining Generative AI models. This position requires close collaboration with data scientists, ML/GenAI engineers, and DevOps teams to ensure the seamless integration and operation of GenAI models within production environments at PwC and for our clients. The ideal candidate will possess a strong background in MLOps practices and a keen interest in Generative AI technologies. Candidates with 4+ years of hands-on experience are preferred. Core qualifications for this role include:
- 3+ years of experience developing and deploying AI models in production environments, plus 1 year of working on proofs of concept and prototypes.
- Proficiency in software development, including building and maintaining scalable, distributed systems.
- Strong programming skills in languages such as Python and familiarity with ML frameworks like TensorFlow and PyTorch.
- Knowledge of containerization and orchestration tools like Docker and Kubernetes.
- Understanding of cloud platforms such as AWS, GCP, and Azure, including their ML/AI service offerings.
- Experience with continuous integration and delivery tools like Jenkins, GitLab CI/CD, or CircleCI.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.

Key Responsibilities:
- Develop and implement MLOps strategies tailored to Generative AI models to ensure robustness, scalability, and reliability.
- Design and manage CI/CD pipelines specialized for ML workflows, including deploying generative models such as GANs, VAEs, and Transformers.
- Monitor and optimize AI model performance in production, using tools for continuous validation, retraining, and A/B testing (see the drift-check sketch after this listing).
- Collaborate with data scientists and ML researchers to translate model requirements into scalable operational frameworks.
- Implement best practices for version control, containerization, and orchestration using industry-standard tools.
- Ensure compliance with data privacy regulations and company policies during model deployment.
- Troubleshoot and resolve issues related to ML model serving, data anomalies, and infrastructure performance.
- Stay updated with the latest MLOps and Generative AI developments to enhance AI capabilities.

Project Delivery:
- Design and implement scalable deployment pipelines for ML/GenAI models to transition them from development to production environments.
- Oversee the setup of cloud infrastructure and automated data ingestion pipelines to meet GenAI workload requirements.
- Create detailed documentation for deployment pipelines, monitoring setups, and operational procedures.

Client Engagement:
- Collaborate with clients to understand their business needs and design ML/LLMOps solutions.
- Present technical approaches and results to technical and non-technical stakeholders.
- Conduct training sessions and workshops for client teams.
- Create comprehensive documentation and user guides for clients.

Innovation and Knowledge Sharing:
- Stay updated with the latest trends in MLOps/LLMOps and Generative AI.
- Develop internal tools and frameworks to accelerate model development and deployment.
- Mentor junior team members and contribute to technical publications.

Professional and Educational Background:
- Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
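One concrete slice of the continuous-validation responsibility above: a drift check that compares live feature traffic against a training-time baseline with a two-sample Kolmogorov-Smirnov test. The threshold and the synthetic data are assumptions for illustration, not PwC's actual tooling.

```python
# A minimal sketch of production drift monitoring: compare a live feature
# distribution against its training baseline with a KS test. Data is synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
live = rng.normal(loc=0.3, scale=1.1, size=1_000)      # recent production traffic

stat, p_value = ks_2samp(baseline, live)
DRIFT_ALPHA = 0.01  # assumed alert threshold

if p_value < DRIFT_ALPHA:
    # In a real pipeline this would page on-call and/or trigger retraining.
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}); schedule retraining.")
else:
    print("No significant drift; keep serving the current model.")
```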

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

Zimetrics is a technology services and solutions provider specializing in Data, AI, and Digital. We help enterprises leverage the economic potential and business value of data from systems, machines, connected devices, and human-generated content. Our core principles of Integrity, Intellect, and Ingenuity guide our value system, engineering expertise, and organizational behavior. We are problem solvers and innovators who challenge conventional wisdom and believe in possibilities.

You will be responsible for:
- Designing scalable and secure cloud-based data architecture solutions.
- Leading data modeling, integration, and migration strategies across platforms (a toy star-schema sketch follows this listing).
- Engaging directly with clients to understand their business needs and translate them into technical solutions.
- Supporting sales/pre-sales teams with solution architecture, technical presentations, and proposals.
- Collaborating with cross-functional teams, including engineering, BI, and product.
- Ensuring best practices in data governance, security, and performance optimization.

To succeed in this role, you must have strong experience with cloud platforms such as AWS, Azure, or GCP; a deep understanding of data warehousing concepts and tools such as Snowflake, Redshift, and BigQuery; and proven expertise in conceptual, logical, and physical data modeling. Excellent communication and client engagement skills are a must, and previous experience in pre-sales or solution consulting is an advantage. You should also be able to present complex technical concepts to non-technical stakeholders effectively.
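As a toy illustration of the dimensional modeling this role calls for, the sketch below builds a one-dimension star schema with the standard-library sqlite3 module and runs a warehouse-style rollup. Table and column names are invented for the example.

```python
# A toy star schema: one fact table keyed to one dimension, then an
# aggregate query over the join. Runs entirely in-memory with sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dim_customer (
        customer_key  INTEGER PRIMARY KEY,
        customer_name TEXT,
        region        TEXT
    );
    CREATE TABLE fact_sales (
        sale_id      INTEGER PRIMARY KEY,
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        amount       REAL
    );
    INSERT INTO dim_customer VALUES (1, 'Acme', 'EMEA'), (2, 'Globex', 'APAC');
    INSERT INTO fact_sales VALUES (10, 1, 250.0), (11, 1, 125.5), (12, 2, 90.0);
""")

# Typical warehouse-style rollup: revenue by region.
for region, revenue in cur.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_sales f JOIN dim_customer d USING (customer_key)
    GROUP BY d.region
"""):
    print(region, revenue)
```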

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

maharashtra

On-site

As a Technical Lead, you will spearhead the development and optimization of core LOS (Loan Origination System) and LMS (Loan Management System) applications. Your role is crucial in introducing and integrating new-age technologies to drive business growth while ensuring system stability, security, and scalability. You will lead and mentor the IT Applications team with a focus on core LOS/LMS platforms. Your responsibilities include overseeing end-to-end application development, implementation, and lifecycle management; ensuring that application security, compliance, and performance standards are consistently met; managing vendor partnerships; conducting regular audits of LOS/LMS platforms; monitoring project progress; optimizing resource utilization; and ensuring timely delivery.

Key skills:
- Proven experience managing core LOS/LMS applications.
- Strong leadership and team-building abilities.
- Knowledge of Fintech ecosystems and digital lending operations.
- Exposure to new-age technologies such as cloud platforms, APIs, automation, and analytics.
- Project management expertise with hands-on experience in Agile or similar methodologies.
- Excellent communication and stakeholder engagement skills.
- In-depth understanding of IT security, compliance, and data integrity.

Qualifications & Experience:
- Bachelor's degree in IT, Computer Science, or a related field
- 5+ years of experience in IT applications development and management
- Background in Fintech or financial services technology preferred
- Experience with digital transformation and process automation initiatives

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

You are an experienced backend developer with 5.5+ years of total experience. You have extensive knowledge of back-end development using Java 8 or higher, the Spring Framework (Core/Boot/MVC), Hibernate/JPA, and WebFlux. Your expertise includes a good understanding of data structures, object-oriented programming, and design patterns. You are well-versed in REST APIs and microservices architecture, and proficient with relational and NoSQL databases, preferably PostgreSQL and MongoDB. Experience with CI/CD tools such as Jenkins, GoCD, or CircleCI is essential. You are familiar with test automation tools like xUnit, Selenium, or JMeter, and have hands-on experience with Apache Kafka or similar messaging technologies (a small consumer sketch follows this listing). Exposure to automated testing frameworks, performance testing tools, containerization tools like Docker, orchestration tools like Kubernetes, and cloud platforms, preferably Google Cloud Platform (GCP), is required. You have a strong understanding of UML and design patterns, excellent problem-solving skills, and a continuous improvement mindset. Effective communication and collaboration with cross-functional teams are key strengths of yours.

Your responsibilities include writing and reviewing high-quality code, thoroughly understanding functional requirements, and analyzing clients' needs. You should be able to envision the overall solution for defined functional and non-functional requirements, determine and implement design methodologies and tool sets, and lead or support UAT and production rollouts. Creating, understanding, and validating the WBS and estimated effort for a given module or task, addressing issues promptly, giving constructive feedback to team members, troubleshooting and resolving complex bugs, and providing solutions during code and design reviews are part of your daily tasks. Additionally, you are expected to carry out POCs to ensure that suggested designs and technologies meet the requirements.

You hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
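The posting is Java/Spring-centric, but as a quick illustration of the Kafka consumption pattern it references, here is a minimal consumer using the kafka-python client, kept in Python for consistency with the other sketches on this page. The topic, consumer group, and broker address are hypothetical.

```python
# A minimal sketch of consuming JSON events from a Kafka topic with
# kafka-python. Topic, group, and broker are hypothetical placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                              # hypothetical topic name
    bootstrap_servers="localhost:9092",    # hypothetical broker address
    group_id="order-processors",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    order = message.value
    # In a real service this would hand off to business logic / persistence.
    print(f"partition={message.partition} offset={message.offset} order={order}")
```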

Posted 3 weeks ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required Candidate profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.
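As one plausible shape for the pipelines described above, here is a minimal Airflow DAG sketch chaining extract, transform, and load tasks. The DAG id, schedule, and task bodies are placeholders, not the employer's actual workflow, and the `schedule` argument assumes Airflow 2.4+.

```python
# A minimal sketch of a daily ELT pipeline as an Airflow DAG (Airflow 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw records from the source API")

def transform(**_):
    print("clean and conform the staged records")

def load(**_):
    print("merge conformed records into the warehouse")

with DAG(
    dag_id="daily_elt_example",     # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # linear dependency chain
```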

Posted 3 weeks ago

Apply

15.0 - 24.0 years

37 - 50 Lacs

Pune

Work from Office

Job Overview: Technical leader with 15+ years of experience in architecting, implementing, and successfully launching Enterprise Cloud infrastructure; participates in RFP responses and customer presentations, and leads cloud architecture discussions for ongoing projects.

Responsibilities:
1. Presales Expertise: Collaborate with the sales team on RFPs to understand customer requirements and challenges, translating them into tailored cloud solutions. Engage in pre-sales activities, including presentations, solution demonstrations, and technical discussions, to showcase the value of cloud offerings. Provide costing and sizing estimations.
2. Solution Design: Develop comprehensive cloud architecture designs based on customer needs, industry best practices, and emerging technologies. Propose optimized solutions that meet the security, availability, scalability, and other non-functional requirements of the RFP or client. Create detailed technical documentation, including network diagrams, deployment diagrams, security diagrams, specifications, and implementation plans for proposed cloud solutions. Ensure proposed solutions meet data protection and data residency regulations (e.g., GDPR, DPDP Act) and the privacy standards relevant to the region or RFP. Apply innovative thinking to support the business in delivering strategic objectives within Group Architecture standards.
3. Cloud Infrastructure Deployment: Design cloud infrastructure solutions that ensure the successful deployment of applications, data, and services on cloud platforms. Build automation and CI/CD pipelines using tools such as Jenkins, Terraform, Ansible/Chef, Helm, Argo CD, and Argo Rollouts. Implement disaster recovery and business continuity measures to minimize downtime. Apply Site Reliability Engineering (SRE) principles, ensure they are reflected in the design, and deploy infrastructure as code.
4. Security - Risk Analysis and Threat Modelling: Perform risk assessments, security and threat analysis, and data classification; set up IAM and access controls (a small boto3 sketch follows this listing). Plan and propose high-security solutions that meet the latest standards and RFP/customer requirements.
5. Collaboration: Collaborate with cross-functional teams, including sales, engineering, and support, to ensure the successful delivery of cloud solutions. Act as a bridge between technical and non-technical stakeholders, facilitating effective communication.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience in a Cloud Architect or similar role, with a focus on pre-sales activities.
- Strong expertise in cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Experience setting up orchestration services (Kubernetes, service mesh, etc.).
- Clear understanding of the solutions offered by cloud platforms and the ability to perform trade-offs.
- In-depth knowledge of cloud architecture principles, network security, and related best practices.
- Strong experience setting up IAM, security groups, and policies.
- Ability to architect an enterprise-grade landing zone with strict adherence to zoning requirements.
- Excellent communication and presentation skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
- Relevant certifications (e.g., AWS Certified Solutions Architect, Azure Solutions Architect, GCP Architect).
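To make the IAM work in section 4 concrete, here is a minimal boto3 sketch that codifies a least-privilege, read-only S3 policy. The policy name and bucket ARN are hypothetical, and a production landing zone would typically manage this through Terraform or CloudFormation rather than ad-hoc scripts.

```python
# A minimal sketch of codifying a least-privilege IAM policy with boto3.
# Assumes AWS credentials are already configured in the environment.
import json

import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyAnalyticsBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-analytics-bucket",      # hypothetical
                "arn:aws:s3:::example-analytics-bucket/*",
            ],
        }
    ],
}

response = iam.create_policy(
    PolicyName="AnalyticsReadOnly",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])  # attach this ARN to a role or group
```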

Posted 3 weeks ago

Apply

10.0 - 15.0 years

13 - 18 Lacs

Hyderabad

Work from Office

The platform engineer is responsible for managing our Windows systems infrastructure across the enterprise to ensure our services are reliable, secure, scalable, and performant. The role involves working effectively with cross-functional teams across the organization to meet business objectives.

What you'll do:
- Design, deploy, and manage Windows systems infrastructure and associated services (e.g., Active Directory, Group Policies, DNS).
- Develop and automate system processes to reduce manual intervention and minimize operational overhead, using scripting and programming languages to automate tasks (a small sketch follows this listing).
- Implement and manage infrastructure as code using tools like Terraform, CloudFormation, Ansible, or similar technologies.
- Actively participate in incident management, quickly identifying and resolving issues, and conduct root cause analysis to prevent future incidents.
- Analyze and optimize system performance using monitoring and logging tools; work to improve response times and decrease downtime.
- Collaborate closely with other engineering teams to enhance security, reliability, and scalability.
- Monitor infrastructure and analyze performance metrics to identify areas for improvement.
- Create and maintain documentation for systems, processes, and procedures to ensure knowledge sharing across teams.
- Stay updated on industry trends and emerging technologies.

What you'll bring:
- 10+ years of experience leveraging automation to manage Windows systems and infrastructure.
- In-depth knowledge of cloud platforms such as AWS and Azure.
- Experience with infrastructure as code (IaC) tools such as Terraform and CloudFormation.
- Strong technical experience implementing, managing, and supporting Windows systems infrastructure.
- Proficiency in one or more programming languages (Python, PowerShell).
- Ability to work flexible hours as required by business priorities; available on a 24x7x365 basis when needed for production-impacting incidents or key customer events.

Stay up to date on everything Blackbaud; follow us on LinkedIn, X, Instagram, Facebook, and YouTube.

Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, veteran status, or any other basis protected by federal, state, or local law.
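As a small example of the Python-plus-PowerShell automation this role involves, the sketch below shells out to PowerShell to report service status on a Windows host. The service list is illustrative, and the script assumes it runs on a Windows machine with PowerShell on the PATH.

```python
# A minimal sketch of a Windows service health check driven from Python.
# Assumes Windows with PowerShell available; service names are illustrative.
import subprocess

SERVICES = ["DNS", "NTDS", "W32Time"]  # typical domain-controller services

def service_status(name: str) -> str:
    """Ask PowerShell for a service's status; returns e.g. 'Running'."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"(Get-Service -Name '{name}').Status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

for svc in SERVICES:
    # A real check would feed these into monitoring/alerting instead of stdout.
    print(f"{svc}: {service_status(svc)}")
```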

Posted 3 weeks ago

Apply