5.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
On-site
Job Description It is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. Summary Database Engineer/ Developer - Core Skills Proficiency in SQL and relational database management systems like PostgreSQL or MySQL, along with database design principles. Strong familiarity with Python for scripting and data manipulation tasks, with additional knowledge of Python OOP being advantageous. A good understanding of data security measures and compliance is also required. Demonstrated problem-solving skills with a focus on optimizing database performance and automating data import processes, and knowledge of cloud-based databases like AWS RDS and Google BigQuery. Min 5 years of experience. JD Database Engineer - Data Research Engineering Position Overview At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Research Engineering Team is a brand new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role. Responsibilities Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale databases, and databases involving big data processing. 
Work on data security and compliance, by implementing access controls, encryption, and compliance standards. Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery. Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and identify and resolve issues. Collaborate with the full-stack web developer in the team to support the implementation of efficient data access and retrieval mechanisms. Implement data security measures to protect sensitive information and comply with relevant regulations. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis. Use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery, in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems. Leverage online resources effectively like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations. Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. 
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently.
Posted 1 month ago
15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Responsible AI Engineer
Project Role Description: Assess AI systems for adherence to predefined thresholds and benchmarks related to responsible, ethical and sustainable practices. Design and implement technology mitigation strategies for systems to ensure ethical and responsible standards are achieved.
Must have skills: Responsible AI
Good to have skills: NA
Educational Qualification: 15 years of full-time education
Summary: As a Responsible AI Engineer, you will assess AI systems for adherence to predefined thresholds and benchmarks related to responsible, ethical, and sustainable practices, and design and implement technology mitigation strategies to ensure ethical and responsible standards are achieved.
Roles & Responsibilities:
- Expected to be an SME with deep knowledge and experience.
- Should have influencing and advisory skills.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Develop and implement responsible AI frameworks.
- Conduct audits and assessments of AI systems for ethical compliance.
- Collaborate with cross-functional teams to ensure responsible AI practices are integrated.
Professional & Technical Skills:
- Must-have skills: Proficiency in Responsible AI.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Posted 1 month ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities
Be able to align data models with business goals and enterprise architecture
Collaborate with Data Architects, Engineers, Business Analysts, and Leadership teams
Lead data modelling, governance discussions, and decision-making across cross-functional teams
Proactively identify data inconsistencies, integrity issues, and optimization opportunities
Design scalable and future-proof data models
Define and enforce enterprise data modelling standards and best practices
Experience working in Agile environments (Scrum, Kanban)
Identify impacted applications, size capabilities, and create new capabilities
Lead complex initiatives with multiple cross-application impacts, ensuring seamless integration
Drive innovation, optimize processes, and deliver high-quality architecture solutions
Understand business objectives, review business scenarios, and plan acceptance criteria for proposed solution architecture
Discuss capabilities with individual applications, resolve dependencies and conflicts, and reach agreements on proposed high-level approaches and solutions
Participate in Architecture Review, present solutions, and review other solutions
Work with Enterprise architects to learn and adopt standards and best practices
Design solutions adhering to applicable rules and compliances
Stay updated with the latest technology trends to solve business problems with minimal change or impact
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications
Undergraduate degree or equivalent experience
8+ years of proven experience in a similar role, leading and mentoring a team of architects and technical leads
Extensive experience with Relational, Dimensional, and NoSQL Data Modelling
Experience in driving innovation, optimizing processes, and delivering high-quality solutions
Experience in large-scale OLAP, OLTP, and hybrid data processing systems
Experience in complex initiatives with multiple cross-application impacts
Expert in Erwin for Conceptual, Logical, and Physical Data Modelling
Expertise in Relational Databases, SQL, indexing and partitioning for databases like Teradata, Snowflake, Azure Synapse or traditional RDBMS
Expertise in ETL/ELT architecture, data pipelines, and integration strategies
Expertise in Data Normalization, Denormalization and Performance Optimization
Exposure to cloud platforms, tools, and AI-based solutions
Solid knowledge of 3NF, Star Schema, Snowflake schema, and Data Vault
Exposure to Java, Python, Spring, Spring Boot framework, SQL, MongoDB, Kafka, React JS, Dynatrace, and Power BI
Knowledge of Azure Platform as a Service (PaaS) offerings (Azure Functions, App Service, Event Grid)
Good knowledge of the latest developments in the technology world
Advanced SQL skills for complex queries, stored procedures, indexing, partitioning, macros, recursive queries, query tuning and OLAP functions
Understanding of Data Privacy Regulations, Master Data Management, and Data Quality
Proven excellent communication and leadership skills
Proven ability to think from a long-term perspective and arrive at intentional and strategic architecture
Proven ability to provide consistent solutions across Lines of Business (LOB)
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Skillsets And Attitudes
Must have:
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.
Technical Proficiency
Hands-on experience with database technologies (e.g., Oracle, SQL Server, MySQL, PostgreSQL) with the ability to work with large data sets
Expertise in writing complex SQL queries for data manipulation and analysis
Experience with one or more programming languages (e.g., Python, Java, C++, etc.).
Strong understanding of data architecture principles.
Skills in tuning database performance, including indexing, partitioning, and query optimization.
Experience in implementing robust backup and recovery strategies
Familiarity with cloud database services (AWS RDS, Azure SQL Database) is a must.
Experience with data warehouses, distributed data platforms, and data lakes.
Good To Have
Certifications (Preferred, but not mandatory): Certifications such as Oracle Certified Professional, Microsoft Certified Database Administrator, or equivalent are advantageous.
Problem-Solving Skills: Strong analytical and problem-solving abilities.
Adaptability: Ability to adapt to new technologies and changing requirements.
Proficiency in data analytics and visualization tools
Ability to navigate ambiguity and work in a fast-moving environment with multiple stakeholders.
Excellent business and technical communication
Your Core Role
We're looking for a skilled Data Engineer to enhance our data systems. You will design and build the foundation of the data/analytics architecture for the organization. Your contributions will be vital in maintaining efficient and reliable database operations to support our organization's data needs.
Key Responsibilities
Design, build, and optimize the data architecture and extract, transform, and load (ETL) pipelines to make them accessible for Business Data Analysts, Data Scientists, and business users.
Drive standards in data reliability, data integrity, and data governance, enabling accurate, consistent, and trustworthy data sets, business intelligence products, and analyses.
Database normalization to reduce redundancy, improve data integrity and design scalable database schemas
Performance tuning including indexing, partitioning, and query optimization.
Review data organization and implement archival strategies to improve overall DB performance
Database security best practices and implement security measures
Implement robust backup and recovery strategies.
Troubleshooting to diagnose and resolve database issues effectively.
Scripting and automation (e.g., PowerShell, Python) for automating database tasks.
What You Can Expect
Five-day workweek. Fair pay. The team will support you and push you. We will debate and question you. We will help you find what you are good at and let you take unilateral decisions there. We will prod you to get better at the things you are not good at. You will interact with our coordinators and field agents on the ground. You will also interact with decision-makers from within the social impact ecosystem. You will enable data-driven business decisions, create aha moments through insights you generate, and create new opportunities through deep analysis using internal/external data. You will use these insights to create better touchpoints that get the job done. You will need to get your hands dirty. You will be a part of discussions with teams like the program delivery team, program management team, product team, CXOs and Engineering Team, which means you will be working across disciplines.
(ref:hirist.tech)
Posted 1 month ago
2.0 - 31.0 years
0 - 1 Lacs
Paldi, Ahmedabad Region
Remote
Key Responsibilities: Common: Develop and maintain custom ERP modules in Angular + Node.js architecture. Translate business logic into scalable RESTful APIs. Work with relational and NoSQL databases for ERP data structuring. Integrate intuitive UI/UX for internal and client-facing modules. Write clean, maintainable code with strong documentation. Understand ERP workflows: sales, purchase, HR, accounts, CRM, warehouse, etc. Perform testing, debugging, and deployment of ERP modules. Senior Developer: Design ERP architecture and lead module ownership. Mentor and review junior team members. Create microservice-based backend for scalable ERP logic. Suggest optimization for speed, performance, and usability. Handle API versioning, DevOps deployments, and database normalization. Junior Developer: Assist in ERP module development under senior guidance. Convert Figma/UI mockups into Angular components. Work on validations, UI logic, and frontend state management. Participate in testing, bug-fixing, and API consumption. Technical Skills – Must Have: Node.js with Express Angular 10+ with RxJS Experience in ERP modules (min. 1 live project) REST API & JSON workflows MongoDB / MySQL / PostgreSQL UI/UX design understanding (Figma/XD to HTML conversion) Git, GitHub / Bitbucket Agile / Scrum methodology
Posted 1 month ago
10.0 years
0 Lacs
India
Remote
Company Description
Staffbee Solutions INC. is a company that focuses on providing quality staffing solutions by finding individuals with strong character attributes, educational backgrounds, practical skills, specialized knowledge, or work experience. The company aims to fulfill requirements with great quality and satisfaction for their clients.
Role Overview: We are looking for a highly skilled and experienced ServiceNow professional (10+ years) to join our freelance technical interview panel. As a Panelist, you’ll play a critical role in assessing candidates for ServiceNow Developer, Admin, and Architect roles by conducting deep technical interviews and evaluating hands-on expertise, problem-solving skills, and platform knowledge. This is an excellent opportunity for technically strong freelancers who enjoy sharing their expertise, influencing hiring decisions, and working flexible hours remotely.
Key Responsibilities:
Conduct live technical interviews and evaluations over video calls (aligned to EST hours)
Assess candidates’ practical expertise in: Core ServiceNow modules (ITSM, CMDB, Discovery, Incident/Change/Problem); Custom application development & configuration; Client/Server-side scripting (JavaScript, Business Rules, UI Policies, Script Includes); Integrations (REST/SOAP APIs, Integration Hub); Flow Designer, Service Portal, ACLs, ATF, and CI/CD practices
Review coding tasks and scenario-based architecture questions
Provide detailed, structured feedback and recommendations to the hiring team
Collaborate on refining technical evaluation criteria if needed
Required Skills & Experience (Advanced Technical Expertise):
10+ years of extensive hands-on experience with the ServiceNow platform in enterprise-grade environments
Strong command over ServiceNow Core Modules: ITSM, ITOM, CMDB, Asset & Discovery, Incident/Change/Problem/Knowledge Management
Proven expertise in custom application development using scoped apps, App Engine Studio, and Now Experience UI Framework
Deep proficiency in ServiceNow scripting, including: Server-side: Business Rules, Script Includes, Scheduled Jobs, GlideRecord, GlideAggregate; Client-side: UI Policies, Client Scripts, UI Actions, GlideForm/GlideUser APIs; Middleware logic for cross-platform communication and custom handlers
Experience implementing Access Control Lists (ACLs) with dynamic filters and condition-based restrictions
Expert in Service Portal customization using AngularJS widgets, Bootstrap, and custom REST endpoints
Proficient in Integration Hub, Custom REST/SOAP APIs, OAuth 2.0 authentication, MID Server integrations, external system integration (e.g., SAP, Azure, Jira, Dynatrace, etc.)
Hands-on with Flow Designer, Orchestration, and Event Management
Expertise in ServiceNow CMDB, CI Class modeling, reconciliation rules, identification/normalization strategies, and dependency mappings
Familiarity with ServiceNow Performance Tuning: Scheduled Jobs optimization, lazy loading, database indexing, client/server execution efficiency
Working knowledge of Automated Test Framework (ATF) and integration with CI/CD pipelines (Jenkins, Git, Azure DevOps)
Understanding of ServiceNow DevOps, version control, scoped app publishing, and update set migration best practices
Knowledge of Security Operations (SecOps) and Governance, Risk & Compliance (GRC) is a plus
Experience guiding architectural decisions, governance models, and platform upgrade strategies
Prior experience conducting technical interviews, design evaluations, or acting as a technical SME/panelist
Excellent communication and feedback documentation skills — able to clearly explain technical rationale and candidate assessments
Comfortable working independently and engaging with global stakeholders during USA EST hours (after 8 PM IST)
Posted 1 month ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Title: ServiceNow Architect
Location: Noida
Experience: 7+
Job Summary
We are seeking a highly skilled ServiceNow professional with deep expertise in Hardware Asset Management (HAM), Software Asset Management (SAM), and Configuration Management Database (CMDB). The ideal candidate will play a key role in designing, implementing, and optimizing asset and configuration management solutions on the ServiceNow platform. This role requires both strong technical acumen and functional understanding of IT asset lifecycle and configuration management best practices.
Key Responsibilities
Design and configure ServiceNow modules including HAM, SAM, and CMDB to align with business goals and ITIL processes.
Implement best practices for asset discovery, normalization, license compliance, and reconciliation using ServiceNow Discovery and IntegrationHub.
Ensure CMDB data integrity and health through effective class models, data normalization, and relationship mapping.
Define asset lifecycle workflows for hardware and software, from procurement to retirement.
Integrate ServiceNow with third-party systems (e.g., SCCM, JAMF, Tanium, Flexera, AWS, Azure) for accurate asset and configuration data ingestion.
Lead workshops with stakeholders to gather requirements and translate them into technical solutions.
Establish and enforce governance, data quality, and reconciliation policies for CMDB and Asset Management.
Collaborate with ITSM, ITOM, Security Ops, and Procurement teams to ensure data alignment across the platform.
Mentor junior developers and provide technical oversight for asset and CMDB-related enhancements.
Drive the roadmap for HAM/SAM/CMDB capabilities in alignment with ServiceNow's latest releases.
Required Skills & Experience
5+ years of hands-on experience in ServiceNow with a focus on HAM, SAM, and CMDB.
Deep knowledge of ServiceNow Discovery, Asset Management Lifecycle, Software License Management, and CMDB design principles.
Proficiency in JavaScript, Glide API, Flow Designer, and REST/SOAP integrations.
Experience implementing ServiceNow SAM Professional and managing vendor software models, entitlements, and compliance.
Familiarity with data ingestion sources and normalization techniques using ILMT, SCCM, BigFix, etc.
Understanding of ITIL v3/v4 framework, especially around Asset, Configuration, and Change Management.
Strong analytical and problem-solving skills, with attention to detail.
Excellent communication and stakeholder management skills.
Certifications (would be great, not mandatory)
ServiceNow Certified System Administrator
ServiceNow Certified Implementation Specialist – HAM / SAM / CMDB / Discovery
ITIL v3 or v4 Foundation Certification
ServiceNow Certified Technical Architect (a plus)
Work on enterprise-scale ServiceNow implementations.
Join a high-performing, collaborative ITSM/ITAM team.
Opportunity to lead digital transformation initiatives using ServiceNow’s latest technologies.
Flexible working environment and continuous learning culture.
Posted 1 month ago
6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
JOB_POSTING-3-71493-1
Job Description
Role Title: AVP, Enterprise Logging & Observability (L11)
Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work for by Great Place to Work. We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview
Splunk is Synchrony's enterprise logging solution. Splunk searches and indexes log files and helps derive insights from the data. The primary goal is to ingest massive datasets from disparate sources and employ advanced analytics to automate operations and improve data analysis. It also offers predictive analytics and unified monitoring for applications, services and infrastructure. Many applications forward data to the Splunk logging solution. The Splunk team, including Engineering, Development, Operations, Onboarding, and Monitoring, maintains Splunk and provides solutions to teams across Synchrony.
Role Summary/Purpose
The AVP, Enterprise Logging & Observability is a key leadership role responsible for driving the strategic vision, roadmap, and development of the organization’s centralized logging and observability platform. This role supports multiple enterprise initiatives including applications, security monitoring, compliance reporting, operational insights, and platform health tracking. This role leads platform development using Agile methodology, manages stakeholder priorities, ensures logging standards across applications and infrastructure, and supports security initiatives. This position bridges the gap between technology teams, applications, platforms, cloud, cybersecurity, infrastructure, DevOps, governance, audit, and risk teams and business partners, owning and evolving the logging ecosystem to support real-time insights, compliance monitoring, and operational excellence.
Key Responsibilities
Splunk Development & Platform Management
Lead and coordinate development activities, ingestion pipeline enhancements, onboarding frameworks, and alerting solutions.
Collaborate with engineering, operations, and Splunk admins to ensure scalability, performance, and reliability of the platform.
Establish governance controls for source naming, indexing strategies, retention, access controls, and audit readiness.
Splunk ITSI Implementation & Management
Develop and configure ITSI services, entities, and correlation searches.
Implement notable events aggregation policies and automate response actions.
Fine-tune ITSI performance by optimizing data models, summary indexing, and saved searches.
Help identify patterns and anomalies in logs and metrics.
Develop ML models for anomaly detection, capacity planning, and predictive analytics.
Utilize Splunk MLTK to build and train models for IT operations monitoring.
Security & Compliance Enablement
Partner with InfoSec, Risk, and Compliance to align logging practices with regulations (e.g., PCI-DSS, GDPR, RBI).
Enable visibility for encryption events, access anomalies, secrets management, and audit trails.
Support security control mapping and automation through observability.
Stakeholder Engagement
Act as a strategic advisor and point of contact for business units, application, infrastructure, and security stakeholders and business teams leveraging Splunk.
Conduct stakeholder workshops, backlog grooming, and sprint reviews to ensure alignment.
Maintain clear and timely communications across all levels of the organization.
Process & Governance
Drive logging and observability governance standards, including naming conventions, access controls, and data retention policies.
Lead initiatives for process improvement in log ingestion, normalization, and compliance readiness.
Ensure alignment with enterprise architecture and data classification models.
Lead improvements in logging onboarding lifecycle time, automation pipelines, and self-service ingestion tools.
Mentor junior team members and guide engineering teams on secure, standardized logging practices.
Required Skills/Knowledge
Bachelor's degree with a minimum of 6+ years of experience in Technology, or in lieu of a degree, 8+ years of experience in Technology
Minimum of 3+ years of experience leading a development team or an equivalent role in observability, logging, or security platforms
Splunk Subject Matter Expert (SME)
Strong hands-on understanding of Splunk architecture, pipelines, dashboards, alerting, data ingestion, search optimization, and enterprise-scale operations
Experience supporting security use cases, encryption visibility, secrets management, and compliance logging
Splunk Development & Platform Management, Security & Compliance Enablement, Stakeholder Engagement, and Process & Governance
Experience with Splunk Premium Apps - ITSI and Enterprise Security (ES) at a minimum
Experience with Data Streaming Platforms & tools like Cribl, Splunk Edge Processor
Proven ability to work in Agile environments using tools such as JIRA or JIRA Align
Strong communication, leadership, and stakeholder management skills
Familiarity with security, risk, and compliance standards relevant to BFSI
Proven experience leading product development teams and managing cross-functional initiatives using Agile methods
Strong knowledge and hands-on experience with Splunk Enterprise/Splunk Cloud
Design and implement Splunk ITSI solutions for proactive monitoring and service health tracking
Develop KPIs, Services, Glass Tables, Entities, Deep Dives, and Notable Events to improve service reliability for users across the firm
Develop scripts (Python, JavaScript, etc.) as needed in support of data collection or integration
Develop new applications leveraging Splunk’s analytic and Machine Learning tools to maximize performance, availability and security, improving business insight and operations
Support senior engineers in analyzing system issues and performing root cause analysis (RCA).
Desired Skills/Knowledge
Deep knowledge of Splunk development, data ingestion, search optimization, alerting, dashboarding, and enterprise-scale operations.
Exposure to SIEM integration, security orchestration, or SOAR platforms.
Knowledge of cloud-native observability (e.g. AWS/GCP/Azure logging).
Experience in BFSI or regulated industries with high-volume data handling.
Familiarity with CI/CD pipelines, DevSecOps integration, and cloud-native logging.
Working knowledge of scripting or automation (e.g., Python, Terraform, Ansible) for observability tooling.
Splunk certifications (Power User, Admin, Architect, or equivalent) will be an advantage.
Awareness of data classification, retention, and masking/anonymization strategies.
Awareness of integration between Splunk and ITSM or incident management tools (e.g., ServiceNow, PagerDuty).
Experience with version control tools – Git, Bitbucket.
Eligibility Criteria
Bachelor's degree with a minimum of 6+ years of experience in Technology, or in lieu of a degree, 8+ years of experience in Technology.
Minimum of 3+ years of experience leading a development team or an equivalent role in observability, logging, or security platforms.
Demonstrated success in managing large-scale logging platforms in regulated environments.
Excellent communication, leadership, and cross-functional collaboration skills.
Experience with scripting languages such as Python, Bash, or PowerShell for automation and integration purposes.
Prior experience in large-scale, security-driven logging or observability platform development.
Excellent problem-solving skills and the ability to work independently or as part of a team.
Strong communication and interpersonal skills to interact effectively with team members and stakeholders.
Knowledge of IT Service Management (ITSM) and monitoring tools.
Knowledge of other data analytics tools or platforms is a plus.
WORK TIMINGS: 01:00 PM to 10:00 PM IST
This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.
For Internal Applicants
Understand the criteria or mandatory skills required for the role before applying
Inform your manager and HRM before applying for any role on Workday
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format)
Must not be on any corrective action plan (First Formal/Final Formal, PIP)
L9+ employees who have completed 18 months in the organization and 12 months in the current role and level are only eligible. L09+ employees can apply.
Level / Grade: 11
Job Family Group: Information Technology
Posted 1 month ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Engineer Location: Bangalore About US FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world. The Opportunity “As a Data Engineer on our newly formed Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform to include fraud investigation, decision automation, process flow automation, and optimization. You will play a critical role in the implementation of Data Warehousing and Data Lake solutions. You will have the opportunity to make a meaningful impact on FICO’s platform by infusing it with next-generation AI capabilities. You’ll work with a dedicated team, leveraging your skills in the data engineering area to build solutions and drive innovation forward. ”. What You’ll Contribute Perform hands-on analysis, technical design, solution architecture, prototyping, proofs-of-concept, development, unit and integration testing, debugging, documentation, deployment/migration, updates, maintenance, and support on Data Platform technologies. Design, develop, and maintain robust, scalable data pipelines for batch and real-time processing using modern tools like Apache Spark, Kafka, Airflow, or similar. Build efficient ETL/ELT workflows to ingest, clean, and transform structured and unstructured data from various sources into a well-organized data lake or warehouse. Manage and optimize cloud-based data infrastructure on platforms such as AWS (e.g., S3, Glue, Redshift, RDS) or Snowflake. Collaborate with cross-functional teams to understand data needs and deliver reliable datasets that support analytics, reporting, and machine learning use cases. Implement and monitor data quality, validation, and profiling processes to ensure the accuracy and reliability of downstream data. Design and enforce data models, schemas, and partitioning strategies that support performance and cost-efficiency. Develop and maintain data catalogs and documentation, ensuring data assets are discoverable and governed. Support DevOps/DataOps practices by automating deployments, tests, and monitoring for data pipelines using CI/CD tools. Proactively identify data-related issues and drive continuous improvements in pipeline reliability and scalability. Contribute to data security, privacy, and compliance efforts, implementing role-based access controls and encryption best practices. Design scalable architectures that support FICO’s analytics and decisioning solutions Partner with Data Science, Analytics, and DevOps teams to align architecture with business needs. What We’re Seeking 7+ years of hands-on experience as a Data Engineer working on production-grade systems. Proficiency in programming languages such as Python or Scala for data processing. 
Strong SQL skills, including complex joins, window functions, and query optimization techniques. Experience with cloud platforms such as AWS, GCP, or Azure, and relevant services (e.g., S3, Glue, BigQuery, Azure Data Lake). Familiarity with data orchestration tools like Airflow, Dagster, or Prefect. Hands-on experience with data warehousing technologies like Redshift, Snowflake, BigQuery, or Delta Lake. Understanding of stream processing frameworks such as Apache Kafka, Kinesis, or Flink is a plus. Knowledge of data modeling concepts (e.g., star schema, normalization, denormalization). Comfortable working in version-controlled environments using Git and managing workflows with GitHub Actions or similar tools. Strong analytical and problem-solving skills, with the ability to debug and resolve pipeline and performance issues. Excellent written and verbal communication skills, with an ability to collaborate across engineering, analytics, and business teams. Demonstrated technical curiosity and passion for learning, with the ability to quickly adapt to new technologies, development platforms, and programming languages as needed. Bachelor’s in computer science or related field. Exposure to MLOps pipelines (MLflow, Kubeflow, Feature Stores) is a plus but not mandatory. Engineers with certifications will be preferred.
Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
Posted 1 month ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Data Engineer About US FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world. The Opportunity “As a Data Engineer on our newly formed Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform to include fraud investigation, decision automation, process flow automation, and optimization. You will play a critical role in the implementation of Data Warehousing and Data Lake solutions. You will have the opportunity to make a meaningful impact on FICO’s platform by infusing it with next-generation AI capabilities. You’ll work with a dedicated team, leveraging your skills in the data engineering area to build solutions and drive innovation forward. ”. What You’ll Contribute Perform hands-on analysis, technical design, solution architecture, prototyping, proofs-of-concept, development, unit and integration testing, debugging, documentation, deployment/migration, updates, maintenance, and support on Data Platform technologies. Design, develop, and maintain robust, scalable data pipelines for batch and real-time processing using modern tools like Apache Spark, Kafka, Airflow, or similar. Build efficient ETL/ELT workflows to ingest, clean, and transform structured and unstructured data from various sources into a well-organized data lake or warehouse. Manage and optimize cloud-based data infrastructure on platforms such as AWS (e.g., S3, Glue, Redshift, RDS) or Snowflake. Collaborate with cross-functional teams to understand data needs and deliver reliable datasets that support analytics, reporting, and machine learning use cases. Implement and monitor data quality, validation, and profiling processes to ensure the accuracy and reliability of downstream data. Design and enforce data models, schemas, and partitioning strategies that support performance and cost-efficiency. Develop and maintain data catalogs and documentation, ensuring data assets are discoverable and governed. Support DevOps/DataOps practices by automating deployments, tests, and monitoring for data pipelines using CI/CD tools. Proactively identify data-related issues and drive continuous improvements in pipeline reliability and scalability. Contribute to data security, privacy, and compliance efforts, implementing role-based access controls and encryption best practices. Design scalable architectures that support FICO’s analytics and decisioning solutions Partner with Data Science, Analytics, and DevOps teams to align architecture with business needs. What We’re Seeking 7+ years of hands-on experience as a Data Engineer working on production-grade systems. Proficiency in programming languages such as Python or Scala for data processing. Strong SQL skills, including complex joins, window functions, and query optimization techniques. 
Experience with cloud platforms such as AWS, GCP, or Azure, and relevant services (e.g., S3, Glue, BigQuery, Azure Data Lake). Familiarity with data orchestration tools like Airflow, Dagster, or Prefect. Hands-on experience with data warehousing technologies like Redshift, Snowflake, BigQuery, or Delta Lake. Understanding of stream processing frameworks such as Apache Kafka, Kinesis, or Flink is a plus. Knowledge of data modeling concepts (e.g., star schema, normalization, denormalization). Comfortable working in version-controlled environments using Git and managing workflows with GitHub Actions or similar tools. Strong analytical and problem-solving skills, with the ability to debug and resolve pipeline and performance issues. Excellent written and verbal communication skills, with an ability to collaborate across engineering, analytics, and business teams. Demonstrated technical curiosity and passion for learning, with the ability to quickly adapt to new technologies, development platforms, and programming languages as needed. Bachelor’s in computer science or related field. Exposure to MLOps pipelines (MLflow, Kubeflow, Feature Stores) is a plus but not mandatory. Engineers with certifications will be preferred.
Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
Posted 1 month ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Engineering Lead Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience: 5–10 Years Compensation: Competitive + Performance-based incentives + Meaningful ESOPs 🧠 About Darwix AI Darwix AI is one of India’s fastest-growing AI startups, building the future of enterprise revenue intelligence. We offer a GenAI-powered conversational intelligence and real-time agent assist suite that transforms how large sales teams interact, close deals, and scale operations. We’re already live with enterprise clients across India, the UAE, and Southeast Asia , and our platform enables multilingual speech-to-text, AI-driven nudges, and contextual conversation coaching—backed by our proprietary LLMs and cutting-edge voice infrastructure. With backing from top-tier VCs and over 30 angel investors, we’re now hiring an Engineering Lead who can architect, own, and scale the core engineering stack as we prepare for 10x growth. 🌟 Role Overview As the Engineering Lead at Darwix AI , you’ll take ownership of our platform architecture, product delivery, and engineering quality across the board. You’ll work closely with the founders, product managers, and the AI team to convert fast-moving product ideas into scalable features. You will: Lead backend and full-stack engineers across microservices, APIs, and real-time pipelines Architect scalable systems for AI/LLM deployments Drive code quality, maintainability, and engineering velocity This is a hands-on, player-coach role —perfect for someone who loves building but is also excited about mentoring and growing a technical team. 🎯 Key Responsibilities🛠️ Technical Leadership Own technical architecture across backend, frontend, and DevOps stacks Translate product roadmaps into high-performance, production-ready systems Drive high-quality code reviews, testing practices, and performance optimization Make critical system-level decisions around scalability, security, and reliability 🚀 Feature Delivery Work with the product and AI teams to build new features around speech recognition, diarization, real-time coaching, and analytics dashboards Build and maintain backend services for data ingestion, processing, and retrieval from Vector DBs, MySQL, and MongoDB Create clean, reusable APIs (REST & WebSocket) that power our web-based agent dashboards 🧱 System Architecture Refactor monoliths into microservice-based architecture Optimize real-time data pipelines with Redis, Kafka, and async queues Implement serverless modules using AWS Lambda, Docker containers, and CI/CD pipelines 🧑🏫 Mentorship & Team Building Lead a growing team of engineers—guide on architecture, code design, and performance tuning Foster a culture of ownership, documentation, and continuous learning Mentor junior developers, review PRs, and set up internal coding best practices 🔄 Collaboration Act as the key technical liaison between Product, Design, AI/ML, and DevOps teams Work directly with founders on roadmap planning, delivery tracking, and go-live readiness Contribute actively to investor tech discussions, client onboarding, and stakeholder calls ⚙️ Our Tech Stack Languages: Python (FastAPI, Django), PHP (legacy support), JavaScript, TypeScript Frontend: HTML, CSS, Bootstrap, Mustache templates; (React.js/Next.js optional) AI/ML Integration: LangChain, Whisper, RAG pipelines, Transformers, Deepgram, OpenAI APIs Databases: MySQL, PostgreSQL, MongoDB, Redis, Pinecone/FAISS (Vector DBs) Cloud & Infra: AWS EC2, S3, Lambda, CloudWatch, Docker, GitHub Actions, Nginx DevOps: Git, Docker, 
CI/CD pipelines, Jenkins/GitHub Actions, load testing Tools: Jira, Notion, Slack, Postman, Swagger 🧑💼 Who You Are 5–10 years of professional experience in backend/full-stack development Proven experience leading engineering projects or mentoring junior devs Comfortable working in high-growth B2B SaaS startups or product-first orgs Deep expertise in one or more backend frameworks (Django, FastAPI, Laravel, Flask) Experience working with AI products or integrating APIs from OpenAI, Deepgram, HuggingFace is a huge plus Strong understanding of system design, DB normalization, caching strategies, and latency optimization Bonus: exposure to working with voice pipelines (STT/ASR), NLP models, or real-time analytics 📌 Qualities We’re Looking For Builder-first mindset – you love launching features fast and scaling them well Execution speed – you move with urgency but don’t break things Hands-on leadership – you guide people by writing code, not just processes Problem-solver – when things break, you own the fix and the root cause Startup hunger – you thrive on chaos, ambiguity, and shipping weekly 🎁 What We Offer High Ownership: Directly shape the product and its architecture from the ground up Startup Velocity: Ship fast, learn fast, and push boundaries Founding Engineer Exposure: Work alongside IIT-IIM-BITS founders with full transparency Compensation: Competitive salary + meaningful equity + performance-based incentives Career Growth: Move into an EM/CTO-level role as the org scales Tech Leadership: Own features end-to-end—from spec to deployment 🧠 Final Note This is not just another engineering role. This is your chance to: Own the entire backend for a GenAI product serving global enterprise clients Lead technical decisions that define our future infrastructure Join the leadership team at a startup that’s shipping faster than anyone else in the category If you're ready to build a product with 10x potential, join a high-output team, and be the reason why the tech doesn’t break at scale, this role is for you. 📩 How to Apply Send your resume to people@darwix.ai with the subject line: “Application – Engineering Lead – [Your Name]” Attach: Your latest CV or LinkedIn profile GitHub/portfolio link (if available) A short note (3–5 lines) on why you're excited about Darwix AI and this role
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
This role will be a part of our growing Platform Solutions team. The primary responsibility would involve working on Oxane’s proprietary platforms in the Private Credit+ space. The incumbent is expected to take the lead on client projects assigned to her/him, working directly with top investment banks, asset management, and investment firms. This role is at the intersection of finance & technology (FinTech) and provides a steep learning curve in the evolving landscape of Private Credit+. The candidate will gain exposure to diverse asset classes and master the complexities of deal structures.
Work directly with the clients on various investment transactions for performing asset-backed loan portfolios, real estate and specialty financing. Analyse and comprehend modelling inputs from deal documents such as information memorandums, servicer reports, facility agreements and other legal reports, and help onboard the deals on Oxane’s tech platform. Create comprehensive working files using Excel, and transform and load data into the database via a standard ETL process. Use SQL to query data and create views to support report implementation. Configure report components using facts, dimensions, and configurator parameters in JavaScript. Assist the client in their specific needs for ad-hoc analytics, portfolio monitoring, surveillance and reporting. Act as a Business Analyst for platform features, client changes and issues, working closely with the development team. Act as an extended client team for asset management reporting, financial due diligence, post-deal analysis and business planning.
Requirements:
B.E./B.Tech in Computer Science or IT along with MBA/PGDM in Finance is mandatory. Strong acumen for engineering technology-driven solutions will be preferred.
Good understanding of financial & lending/debt concepts and Advanced Excel functions, with SQL (must have – Advanced level), along with an understanding of Data Architecture, Storage and Normalization.
Prior experience with a similar engagement will be given preference.
Good attention to detail and a logical thought process to analyse large amounts of qualitative and quantitative data.
Strong written and verbal communication skills.
Self-starter personality and ability to work well under pressure in a fast-paced environment.
Posted 1 month ago
0 years
3 - 9 Lacs
Hyderābād
On-site
- Experience programming in Java, C++, Python or related language
- Experience with SQL and an RDBMS (e.g., Oracle) or Data Warehouse
Customer addresses, geospatial information and the road network play a crucial role in Amazon Logistics' Delivery Planning systems. We own exciting science problems in the areas of Address Normalization, Geocode learning, Maps learning, and Time estimations, including route-time, delivery-time, and transit-time predictions, which are key inputs in delivery planning. As part of the Geospatial science team within Last Mile, you will partner closely with other scientists and engineers in a collegial environment to develop enterprise ML solutions with a clear path to business impact. The setting also gives you an opportunity to think about a complex large-scale problem for multiple years and build increasingly sophisticated solutions year over year. In the process there will be opportunities to innovate, explore SOTA approaches and publish the research at internal and external ML conferences. Successful candidates will have deep knowledge of competing machine learning methods for large-scale predictive modelling, natural language processing, and semi-supervised & graph-based learning. We also look for the experience to graduate prototype models to production and the communication skills to explain complex technical approaches to stakeholders of varied technical expertise.
Key job responsibilities
As an Applied Scientist I, your responsibility will be to deliver on a well-defined but complex business problem, explore SOTA technologies including GenAI, and customize the large models as suitable for the application. Your job will be to work on an end-to-end business problem from design to experimentation and implementation. There is also an opportunity to work on open-ended ML directions within the space and publish the work at prestigious ML conferences.
About the team
The LMAI team owns the WW charter for address and location learning solutions, which are crucial for efficient Last Mile delivery planning, and also owns problems in the space of maps learning and travel time estimations.
- Experience implementing algorithms using both toolkits and self-developed code
- Have publications at top-tier peer-reviewed conferences or journals
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 month ago
8.0 years
0 Lacs
Orissa
Remote
No. of Positions: 1 Position: Data Integration Technical Lead Location: Hybrid or Remote Total Years of Experience: 8+ years

Experience: 8+ years of experience in data integration, cloud technologies, and API-based integrations. At least 3 years in a technical leadership role overseeing integration projects. Proven experience in integrating cloud-based systems, on-premise systems, databases, and legacy platforms. Informatica Cloud (IICS) or Mulesoft certifications are preferable.

Technical Expertise: Expertise in designing and implementing integration workflows using IICS, Mulesoft, or other integration platforms. Proficient in integrating cloud and on-premise systems, databases, and legacy platforms using API integrations, REST/SOAP, and middleware tools. Strong knowledge of Salesforce CRM, Microsoft Dynamics CRM, and other enterprise systems for integration. Experience in creating scalable, secure, and high-performance data integration solutions. Deep understanding of data modelling, transformation, and normalization techniques for integrations. Strong experience in troubleshooting and resolving integration issues.

Key Responsibilities: Work with architects and client stakeholders to design data integration solutions that align with business needs and industry best practices. Lead the design and implementation of data integration pipelines, frameworks, and cloud integrations. Lead and mentor a team of data integration professionals, conducting code reviews and ensuring high-quality deliverables. Design and implement integrations with external systems using APIs, middleware, and cloud services. Develop data transformation workflows and custom scripts to integrate data between systems. Stay updated on new integration technologies and recommend improvements as necessary. Excellent verbal and written communication skills to engage with both technical and non-technical stakeholders. Proven ability to explain complex technical concepts clearly and concisely.

Don’t see a role that fits? We are growing rapidly and always on the lookout for passionate and smart engineers! If you are passionate about your career, reach out to us at careers@hashagile.com.
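A minimal sketch of the API-based integration workflow described above: pull records from a hypothetical REST endpoint, apply a simple transformation, and load them into a relational target. The endpoint, token, and field names are assumptions for illustration; a real IICS or Mulesoft flow would replace this script.

```python
import sqlite3
import requests

API_URL = "https://api.example.com/v1/accounts"   # hypothetical source system
headers = {"Authorization": "Bearer <token>"}      # placeholder credential

resp = requests.get(API_URL, headers=headers, timeout=30)
resp.raise_for_status()
records = resp.json()

# Transform: keep only the fields the target model needs, normalising names.
rows = [(r["id"], r["name"].strip().title(), r["country"]) for r in records]

# Load into the target table (SQLite stands in for the destination database).
conn = sqlite3.connect("integration_target.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS accounts (id TEXT PRIMARY KEY, name TEXT, country TEXT)"
)
conn.executemany("INSERT OR REPLACE INTO accounts VALUES (?, ?, ?)", rows)
conn.commit()
```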
Posted 1 month ago
0 years
4 - 9 Lacs
Noida
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant – Java Developer. In this role, you will be responsible for developing Microsoft Access databases, including tables, queries, forms, and reports, using standard IT processes, with data normalization and referential integrity.

Responsibilities: Collaborate with business stakeholders to gain a real-time understanding of business problems; development is expected to follow an agile methodology. Struts 6 (experience with Struts 6.0 is preferred; experience with Struts 2.0 plus knowledge of Struts 6 is acceptable. Struts is mandatory). Deliver high-quality change within the deadlines. In this role, you will be responsible for coding, testing, and delivering high-quality deliverables, along with reviewing the work of team members. Should be willing to learn new technologies. Understand and effectively communicate interactions between the front-end and back-end systems.

Qualifications we seek in you! Minimum Qualifications: BE/B.Tech/M.Tech/MCA.

Preferred qualifications: Java (1.8 or higher), Spring Boot framework (Core, AOP, Batch, JMS), Web Services (SOAP/REST), Oracle PL/SQL, Microservices, SQL. Experience working with JavaScript (ExtJS framework), J2EE, Spring Boot, REST, JSON, and microservices. Experience with the TCF Framework (this is a homegrown Java framework from CVS, so candidates may not have direct experience with it; experience with any similar MVC framework such as Struts or JSF is acceptable). Experience with IBM WebSphere server. Experience with version control tools like Dimensions. Experience with HTML, XML, and XSLT.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job Lead Consultant Primary Location India-Noida Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jun 17, 2025, 5:42:29 AM Unposting Date Ongoing Master Skills List Consulting Job Category Full Time
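A small sketch of the data normalization and referential integrity idea mentioned above: customer data split into two related tables, with a foreign key enforcing integrity. Table and column names are illustrative, and SQLite stands in for the actual database regardless of the application stack.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enable foreign-key enforcement in SQLite

conn.executescript("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
        order_date  TEXT NOT NULL,
        amount      REAL NOT NULL
    );
""")

conn.execute("INSERT INTO customers VALUES (1, 'Acme Ltd')")
conn.execute("INSERT INTO orders VALUES (100, 1, '2025-06-17', 250.0)")

# This insert violates referential integrity (no customer 99) and is rejected.
try:
    conn.execute("INSERT INTO orders VALUES (101, 99, '2025-06-17', 10.0)")
except sqlite3.IntegrityError as exc:
    print("Rejected:", exc)
```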
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Position: Are you a passionate backend engineer looking to make a significant impact? Join our cross-functional, distributed team responsible for building and maintaining the core backend functionalities that power our customers. You’ll be instrumental in developing scalable and robust solutions, directly impacting the efficiency and reliability of our platform. This role offers a unique opportunity to work on cutting-edge technologies and contribute to a critical part of our business, all within a supportive and collaborative environment.

Role: Junior .NET Engineer Location: Hyderabad Experience: 3 to 5 years Job Type: Full Time Employment

What You'll Do: Implement features/modules as per the design and requirements shared by the Architect, Leads, and BA/PM, using coding best practices. Develop and maintain microservices using C# and .NET Core, and perform unit testing as per the code coverage benchmark. Support testing and deployment activities for containerized microservices (Docker, Kubernetes, Ansible, etc.). Create and maintain RESTful APIs to facilitate communication between microservices and other components. Analyze and fix defects to develop stable, high-standard code as per design specifications. Utilize version control systems (e.g., Git) to manage source code. Requirement Analysis: Understand and analyze functional/non-functional requirements and seek clarifications from the Architect/Leads for a better understanding of requirements. Participate in estimation activities for given requirements. Coding and Development: Write clean and maintainable code using software development best practices. Make use of code analyzer tools. Follow a TDD approach for any implementation. Perform coding and unit testing as per design. Problem Solving/Defect Fixing: Investigate and debug any defect raised. Find root causes, find solutions, explore alternate approaches, and then fix defects with appropriate solutions. Fix defects identified during functional/non-functional testing and during UAT within agreed timelines. Perform estimation for defect fixes for self and the team. Deployment Support: Provide prompt response during production support.

Expertise You'll Bring: Language – C#, Visual Studio Professional, Visual Studio Code, .NET Core 3.1 onwards, Entity Framework with a code-first approach, Dependency Injection, Error Handling and Logging, SDLC, Object-Oriented Programming (OOP) Principles, SOLID Principles, Clean Coding Principles, Design patterns. API: REST API with token-based Authentication & Authorization, Postman, Swagger. Database: Relational databases (SQL Server/MySQL/PostgreSQL), Stored Procedures and Functions, Relationships, Data Normalization & Denormalization, Indexes and Performance Optimization techniques (a brief indexing sketch follows this listing).

Preferred Skills: Development exposure to cloud (Azure/GCP/AWS). Code quality tool – Sonar. Exposure to CI/CD processes and tools like Jenkins. Good understanding of Docker and Kubernetes. Exposure to Agile software development methodologies and ceremonies.

Benefits: Competitive salary and benefits package. Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a value-driven and people-centric work environment that enables our employees to: accelerate growth, both professionally and personally; impact the world in powerful, positive ways, using the latest technologies; enjoy collaborative innovation, with diversity and work-life wellbeing at the core; and unlock global opportunities to work and learn with the industry’s best. Let’s unleash your full potential at Persistent. “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
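A brief, illustrative sketch of the "Indexes and Performance Optimization" item in the listing above: the same lookup query before and after adding an index, with SQLite's query plan showing the switch from a full scan to an index search. Table and column names are assumptions, and SQLite stands in for the relational database named in the role.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, i % 1000, float(i)) for i in range(10_000)],
)

query = "SELECT SUM(amount) FROM orders WHERE customer_id = ?"

print("Before index:")
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(" ", row)   # expect a full table SCAN

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

print("After index:")
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(" ", row)   # expect SEARCH ... USING INDEX idx_orders_customer
```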
Posted 1 month ago
15.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction: Joining the IBM Technology Expert Labs teams means you'll have a career delivering world-class services for our clients. As the ultimate expert in IBM products, you'll bring together all the necessary technology and services to help customers solve their most challenging problems. Working in IBM Technology Expert Labs means accelerating the time to value confidently and ensuring speed and insight while our clients focus on what they do best—running and growing their business. Excellent onboarding and an industry-leading learning culture will set you up for a positive impact while advancing your career. Our culture is collaborative and experiential. As part of a team, you will be surrounded by bright minds and keen co-creators—always willing to help and be helped—as you apply passion to work that will positively impact the world around us.

Your Role And Responsibilities: The candidate is responsible for: DB2 installation and configuration in the following environments: on-prem, multi-cloud, Red Hat OpenShift cluster, HADR, non-DPF and DPF. Migration of other databases to Db2 (e.g., Teradata, Snowflake, SAP, or Cloudera to Db2). Creating high-level and detailed-level designs and maintaining product roadmaps covering both modernization and leveraging cloud solutions. Designing scalable, performant, and cost-effective data architectures within the Lakehouse to support diverse workloads, including reporting, analytics, data science, and AI/ML. Performing health checks of the databases, making recommendations, and delivering tuning at the database and system level. Deploying DB2 databases as containers within Red Hat OpenShift clusters. Configuring containerized database instances, persistent storage, and network settings to optimize performance and reliability. Leading the architectural design and implementation of solutions on IBM watsonx.data, ensuring alignment with the overall enterprise data strategy and business objectives. Defining and optimizing the watsonx.data ecosystem, including integration with other IBM watsonx components (watsonx.ai, watsonx.governance) and existing data infrastructure (DB2, Netezza, cloud data sources). Establishing best practices for data modeling, schema evolution, and data organization within the watsonx.data lakehouse. Acting as a subject matter expert on Lakehouse architecture, providing technical leadership and guidance to data engineering, analytics, and development teams. Mentoring junior architects and engineers, fostering their growth and knowledge in modern data platforms. Participating in the development of architecture governance processes and promoting best practices across the organization. Communicating complex technical concepts to both technical and non-technical stakeholders.

Required Technical And Professional Expertise: 15+ years of experience in data architecture, data engineering, or a similar role, with significant hands-on experience in cloud data platforms. Strong proficiency in DB2, SQL, and Python. Strong understanding of: database design and modelling (dimensional, normalized, NoSQL schemas); normalization and indexing; data warehousing and ETL processes; cloud platforms (AWS, Azure, GCP); big data technologies (e.g., Hadoop, Spark). Database migration project experience from one database to another (target database Db2). Experience in deploying DB2 databases as containers within Red Hat OpenShift clusters and configuring containerized database instances, persistent storage, and network settings to optimize performance and reliability. Excellent communication, collaboration, problem-solving, and leadership skills.

Preferred Technical And Professional Experience: Experience with machine learning environments and LLMs. Certification in IBM watsonx.data or related IBM data and AI technologies. Hands-on experience with a Lakehouse platform (e.g., Databricks, Snowflake). Exposure to implementing, or an understanding of, DB replication processes. Experience with integrating watsonx.data with GenAI or LLM initiatives (e.g., RAG architectures). Experience with NoSQL databases (e.g., MongoDB, Cassandra). Experience with data modeling tools (e.g., ER/Studio, ERwin). Knowledge of data governance and compliance standards (e.g., GDPR, HIPAA). Soft skills.
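A minimal sketch of a scripted DB2 health check of the kind mentioned above, assuming the ibm_db Python driver; the connection string is a placeholder, and the single admin-view query shown is an illustration rather than IBM Expert Labs tooling.

```python
import ibm_db

dsn = ("DATABASE=SAMPLE;HOSTNAME=db2-host.example.com;PORT=50000;"
       "PROTOCOL=TCPIP;UID=db2inst1;PWD=<password>;")
conn = ibm_db.connect(dsn, "", "")

# Basic connectivity check against the DB2 dummy table.
ibm_db.exec_immediate(conn, "SELECT 1 FROM SYSIBM.SYSDUMMY1")

# Report instance-level information for the health-check summary.
stmt = ibm_db.exec_immediate(
    conn, "SELECT INST_NAME, SERVICE_LEVEL FROM SYSIBMADM.ENV_INST_INFO"
)
row = ibm_db.fetch_assoc(stmt)
print(f"Instance {row['INST_NAME']} is at service level {row['SERVICE_LEVEL']}")

ibm_db.close(conn)
```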
Posted 1 month ago
0.0 - 1.0 years
0 Lacs
Thergaon, Pune, Maharashtra
On-site
PHP Developer Company Name: SiGa Systems Pvt. Ltd. SiGa Systems is a fast-growing IT software development company that enables successful technology-based digital transformation initiatives for enterprises, to create a business that is connected, open, intelligent, and scalable. We are an offshore web development company with clients all across the globe. Since our inception in the year 2016, we have provided web and application development services for varied business domains. Job Description in Brief: We are looking for candidates with 0 to 6 months of experience, proficient in PHP, WordPress, Laravel, or CodeIgniter, to develop websites and web applications in core PHP. The desired candidate will be involved in the full software/website development life cycle, from requirement analysis to testing. The candidate should be able to work in a team or handle projects independently. Company Address: Office No. 101, Metropole, Near BRT Bus Stop, Dange Chowk, Thergaon, Pune, Maharashtra – 411 033 Company Website: https://sigasystems.com/ Qualification: BE/ B. Tech/ M. Tech/ MCA/ MCS/ MCM Work Experience: 0 to 6 months Annual CTC Range: As per company norms & market standards Technical Key skills: · Expertise in MVC and PHP frameworks (Laravel, CodeIgniter), WCF, Web API, and Entity Framework. · Proficient in jQuery, AJAX, and Bootstrap. · Good knowledge of HTML5, CSS3, JavaScript, SQL Server, WordPress, and MySQL. · Hands-on core PHP along with experience in AJAX, jQuery, Bootstrap, and APIs. · Experience with project management systems like Jira, Trello, Click, BugHerd, Basecamp, etc. · High proficiency with Git. · Experience with RESTful APIs. · Able to work with a team. · Must have good communication skills. Desired Competencies: Bachelor’s degree in Computer Science or related field. Good expertise in core PHP along with working exposure to HTML, HTML5, JavaScript, CSS, AJAX, jQuery, Bootstrap, and APIs. PHP scripting with MVC architecture frameworks like CodeIgniter and Laravel. Knowledge of Linux, web application development, and quality software development. Optimizing MySQL queries and databases to improve performance. Excellent conceptual, analytical, and programming skills. Knowledge of Object-Oriented Programming (OOP) concepts with Smarty and AJAX. Should be well-versed with OS: Linux/UNIX, Windows (LAMP and WAMP). Knowledge of relational database management systems, database design, and normalization. Preference will be given to candidates with working knowledge of open-source platforms like WordPress, Shopify, and other open-source e-commerce systems. Good communication skills (spoken/written) will be a plus. Must be technically and logically strong. Industry: IT-Software / Software Services Functional Area: IT Software – Design & Development Role Category: Developer Role: PHP Developer/Laravel Employment Type: Permanent Job, Full Time Roles & Responsibilities: Should be responsible for developing websites and web-based applications using open-source systems. Monitor, manage, and maintain the server environments where PHP Laravel applications are hosted, ensuring optimal performance, security, and availability. Integrate third-party APIs and services as needed. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment. Actively participate in quality assurance activities including design and code reviews, unit testing, defect fixes, and operational readiness.
Diagnose and resolve server-related issues, including those impacting the performance of Laravel applications. This includes debugging server errors, analyzing logs, and identifying root causes of downtime or slow response times. Manage development projects from inception to completion autonomously and independently. Provide administrative support, tools, and documentation for specific development projects. Design applications and database structures for performance and scalability. Deliver accurate project requirements and timeline estimates, providing regular feedback and consistently meeting project deadlines. Design and implement web-based back-end components that are high-performing and scalable. Participate in and improve development processes and tools for other development teams. Contribute ideas and effort towards the project and work as part of a team to find solutions to various problems. If this opportunity feels like the perfect match for you, don’t wait: apply now! Reach out to us via email or WhatsApp using the details below. Let’s connect and create something extraordinary together! Contact Person Name: HR Riddhi Email: hr@sigasystems.com WhatsApp: +91 8873511171 Job Type: Full-time Pay: ₹12,500.00 - ₹14,000.00 per month Benefits: Paid sick time, Paid time off Schedule: Rotational shift Education: Bachelor's (Preferred) Experience: total work: 1 year (Preferred); PHP/Laravel: 1 year (Preferred) Language: English (Preferred) Expected Start Date: 16/07/2025
Posted 1 month ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Role: Database Engineer Location: Remote Notice Period: 30 Days

Skills And Experience: Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and an understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and the ability to work independently.
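A minimal sketch of the kind of automated import workflow the role describes, using pandas and SQLAlchemy (both named in the listing). The CSV path, table name, and PostgreSQL connection URL are placeholders for illustration.

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/analytics")

def import_csv(path: str, table: str) -> int:
    """Load a CSV file into the target table, returning the number of rows written."""
    df = pd.read_csv(path)
    # Light cleanup before load: consistent column names, drop fully empty rows.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.dropna(how="all")
    df.to_sql(table, engine, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    rows = import_csv("exports/products.csv", "staging_products")
    print(f"Imported {rows} rows")
```

Wrapping the load in a function like this makes it easy to schedule the same import for multiple files via cron or a CI/CD job.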
Posted 1 month ago
8.0 years
0 Lacs
India
Remote
Job title: Data Engineer Experience: 5–8 Years Location: Remote Shift: IST (Indian Standard Time) Contract Type: Short-Term Contract

Job Overview: We are seeking an experienced Data Engineer with deep expertise in Microsoft Fabric to join our team on a short-term contract basis. You will play a pivotal role in designing and building scalable data solutions and enabling business insights in a modern cloud-first environment. The ideal candidate will have a passion for data architecture, strong hands-on technical skills, and the ability to translate business needs into robust technical solutions.

Key Responsibilities: Design and implement end-to-end data pipelines using Microsoft Fabric components (Data Factory, Dataflows Gen2). Build and maintain data models, semantic layers, and data marts for reporting and analytics. Develop and optimize SQL-based ETL processes integrating structured and unstructured data sources. Collaborate with BI teams to create effective Power BI datasets, dashboards, and reports. Ensure robust data integration across various platforms (on-premises and cloud). Implement mechanisms for data quality, validation, and error handling. Translate business requirements into scalable and maintainable technical solutions. Optimize data pipelines for performance and cost-efficiency. Provide technical mentorship to junior data engineers as needed.

Required Skills: Hands-on experience with Microsoft Fabric: Dataflows Gen2, Pipelines, OneLake. Strong proficiency in Power BI, including semantic modeling and dashboard/report creation. Deep understanding of data modeling techniques: star schema, snowflake schema, normalization, denormalization. Expertise in SQL, stored procedures, and query performance tuning. Experience integrating data from diverse sources: APIs, flat files, databases, and streaming. Knowledge of data governance, lineage, and data catalog tools within the Microsoft ecosystem. Strong problem-solving skills and ability to manage large-scale data workflows.
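A small sketch of the denormalization step behind a reporting data mart, as described above: joining a fact table to its dimensions into a flat table that BI tools can consume directly. Table and column names are assumptions, and SQLite stands in for the Fabric warehouse.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product_name TEXT, category TEXT);
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, calendar_date TEXT, month TEXT);
    CREATE TABLE fact_sales  (product_key INTEGER, date_key INTEGER, quantity INTEGER, amount REAL);

    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware');
    INSERT INTO dim_date    VALUES (20250601, '2025-06-01', '2025-06');
    INSERT INTO fact_sales  VALUES (1, 20250601, 3, 29.97);

    -- Denormalized mart table consumed directly by reports and dashboards.
    CREATE TABLE mart_sales AS
    SELECT d.month, p.category, p.product_name,
           SUM(f.quantity) AS units, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    JOIN dim_date    d ON d.date_key    = f.date_key
    GROUP BY d.month, p.category, p.product_name;
""")

print(conn.execute("SELECT * FROM mart_sales").fetchall())
```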
Posted 1 month ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Work Level: Individual Core: Responsible Leadership: Team Alignment Industry Type: Information Technology Function: Database Administrator Key Skills: PL/SQL, SQL Writing, mSQL Education: Graduate Note: This is a requirement for one of the Workassist Hiring Partners. This is a remote position.

Primary Responsibility: Write, optimize, and maintain SQL queries, stored procedures, and functions. Assist in designing and managing relational databases. Perform data extraction, transformation, and loading (ETL) tasks. Ensure database integrity, security, and performance. Work with developers to integrate databases into applications. Support data analysis and reporting by writing complex queries. Document database structures, processes, and best practices.

Requirements: Currently pursuing or recently completed a degree in Computer Science, Information Technology, or a related field. Strong understanding of SQL and relational database concepts. Experience with databases such as MySQL, PostgreSQL, SQL Server, or Oracle. Ability to write efficient and optimized SQL queries. Basic knowledge of indexing, stored procedures, and triggers. Understanding of database normalization and design principles. Good analytical and problem-solving skills. Ability to work independently and in a team in a remote setting.

Preferred Skills (Nice to Have): Experience with ETL processes and data warehousing. Knowledge of cloud-based databases (AWS RDS, Google BigQuery, Azure SQL). Familiarity with database performance tuning and indexing strategies. Exposure to Python or other scripting languages for database automation. Experience with business intelligence (BI) tools like Power BI or Tableau.

What We Offer: Fully remote internship with flexible working hours. Hands-on experience with real-world database projects. Mentorship from experienced database professionals. Certificate of completion and potential for a full-time opportunity based on performance.

Company Description: Workassist is an online recruitment and employment solution platform based in Lucknow, India. We provide relevant profiles to employers and connect job seekers with the best opportunities across various industries. With a network of over 10,000+ recruiters, we help employers recruit talented individuals from sectors such as Banking & Finance, Consulting, Sales & Marketing, HR, IT, Operations, and Legal. We have adapted to the new normal and strive to provide a seamless job search experience for job seekers worldwide. Our goal is to enhance the job-seeking experience by leveraging technology and matching job seekers with the right employers. For a seamless job search experience, visit our website: https://bit.ly/3QBfBU2 (Note: There are many more opportunities apart from this on the portal. Depending on your skills, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 month ago
0.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
You deserve to do what you love, and love what you do – a career that works as hard for you as you do. At Fiserv, we are more than 40,000 #FiservProud innovators delivering superior value for our clients through leading technology, targeted innovation and excellence in everything we do. You have choices – if you strive to be a part of a team driven to create with purpose, now is your chance to Find your Forward with Fiserv. Responsibilities Requisition ID R-10363280 Date posted 06/17/2025 End Date 06/30/2025 City Pune State/Region Maharashtra Country India Additional Locations Noida, Uttar Pradesh Location Type Onsite Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title Tech Lead, Data Architecture What does a great Data Architect do at Fiserv? We are seeking a seasoned Data Architect with extensive experience in data modeling and architecting data solutions, particularly with Snowflake. The ideal candidate will have 8-12 years of hands-on experience in designing, implementing, and optimizing data architectures to meet the evolving needs of our organization. As a Data Architect, you will play a pivotal role in ensuring the robustness, scalability, and efficiency of our data systems. What you will do: Data Architecture Design: Develop, optimize, and oversee conceptual and logical data systems, ensuring they meet both current and future business requirements. Data Modeling: Create and maintain data models using Snowflake, ensuring data integrity, performance, and security. Solution Architecture: Design and implement end-to-end data solutions, including data ingestion, transformation, storage, and access. Stakeholder Collaboration: Work closely with business stakeholders, data scientists, and engineers to understand data requirements and translate them into technical specifications. Performance Optimization: Monitor and improve data system performance, addressing any issues related to scalability, efficiency, and data quality. Governance and Compliance: Ensure data architectures comply with data governance policies, standards, and industry regulations. Technology Evaluation: Stay current with emerging data technologies and assess their potential impact and value to the organization. Mentorship and Leadership: Provide technical guidance and mentorship to junior data architects and engineers, fostering a culture of continuous learning and improvement. What you will need to have: 8-12 years of experience in data architecture and data modeling in Snowflake. Proficiency in the Snowflake data warehousing platform. Strong understanding of data modeling concepts, including normalization, denormalization, star schema, and snowflake schema (a brief schema sketch follows this listing). Experience with ETL/ELT processes and tools. Familiarity with data governance and data security best practices. Knowledge of SQL and performance tuning for large-scale data systems. Experience with cloud platforms (AWS, Azure, or GCP) and related data services. Excellent problem-solving and analytical skills.
Strong communication and interpersonal skills, with the ability to translate technical concepts for non-technical stakeholders. Demonstrated ability to lead and mentor technical teams. What would be nice to have: Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Certifications: Snowflake certifications or other relevant industry certifications. Industry Experience: Experience in the Finance/Cards/Payments industry. Thank you for considering employment with Fiserv. Please: apply using your legal name; complete the step-by-step profile and attach your resume (either is acceptable, both are preferable). Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
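A brief sketch contrasting the star and snowflake schema designs called out in the listing above: in the snowflaked form the product dimension is normalized into separate product and category tables, trading wider joins for less redundancy. Names are illustrative assumptions, and SQLite stands in for the Snowflake warehouse.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Snowflaked (normalized) dimension: category split out of product.
    CREATE TABLE dim_category (category_key INTEGER PRIMARY KEY, category_name TEXT);
    CREATE TABLE dim_product  (product_key INTEGER PRIMARY KEY, product_name TEXT,
                               category_key INTEGER REFERENCES dim_category(category_key));
    CREATE TABLE fact_sales   (product_key INTEGER, amount REAL);

    INSERT INTO dim_category VALUES (10, 'Hardware');
    INSERT INTO dim_product  VALUES (1, 'Widget', 10);
    INSERT INTO fact_sales   VALUES (1, 49.99), (1, 19.99);
""")

# The reporting query must traverse the extra hop introduced by normalization;
# a star schema would fold category_name directly into dim_product.
rows = conn.execute("""
    SELECT c.category_name, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product  p ON p.product_key  = f.product_key
    JOIN dim_category c ON c.category_key = p.category_key
    GROUP BY c.category_name
""").fetchall()
print(rows)
```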
Posted 1 month ago
2.0 years
0 Lacs
Port Blair, Andaman and Nicobar Islands, India
On-site
Job Title: Database Developer Location: Calicut, Kerala (On-site) Experience: Minimum 2 Years Job Type: Full-time Notice: Immediate/15 days. Candidates from Kerala are highly preferred.

Job Summary: We are hiring a skilled and detail-oriented Database Developer with at least 2 years of experience to join our team in Calicut. The ideal candidate will have hands-on expertise in SQL and PostgreSQL, with a strong understanding of database design, development, and performance optimization. Experience with Azure cloud services is a plus.

Key Responsibilities: Design, develop, and maintain database structures, stored procedures, functions, and triggers. Write optimized SQL queries for integration with applications and reporting tools. Ensure data integrity, consistency, and security across platforms. Monitor and tune database performance for high availability and scalability. Collaborate with developers and DevOps teams to support application development. Maintain and update technical documentation related to database structures and processes. Assist in data migration and backup strategies. Work with cloud-based databases and services (preferably on Azure).

Required Skills & Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Minimum 2 years of experience as a Database Developer or in a similar role. Strong expertise in SQL and PostgreSQL database development. Solid understanding of relational database design and normalization. Experience in writing complex queries, stored procedures, and performance tuning. Familiarity with version control systems like Git. Strong analytical and troubleshooting skills.

Preferred Qualifications: Experience with Azure SQL Database, Data Factory, or related services. Knowledge of data warehousing and ETL processes. Exposure to NoSQL or other modern database technologies is a plus.
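A short sketch of the "stored procedures, functions, and triggers" responsibility above, using PostgreSQL via psycopg2: a PL/pgSQL function keeps an updated_at column current through a BEFORE UPDATE trigger. The connection string and table are placeholders, and EXECUTE FUNCTION assumes PostgreSQL 11 or later.

```python
import psycopg2

conn = psycopg2.connect("dbname=appdb user=app_user password=<password> host=localhost")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS products (
        product_id  SERIAL PRIMARY KEY,
        name        TEXT NOT NULL,
        price       NUMERIC(10, 2) NOT NULL,
        updated_at  TIMESTAMPTZ NOT NULL DEFAULT now()
    );

    -- PL/pgSQL trigger function: stamp the row on every update.
    CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
    BEGIN
        NEW.updated_at := now();
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    DROP TRIGGER IF EXISTS trg_products_updated_at ON products;
    CREATE TRIGGER trg_products_updated_at
        BEFORE UPDATE ON products
        FOR EACH ROW EXECUTE FUNCTION set_updated_at();
""")
conn.commit()
cur.close()
conn.close()
```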
Posted 1 month ago
9.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Description: 9+ years of working experience in Data Engineering and Data Analytics projects, implementing Data Warehouse, Data Lake, and Lakehouse solutions and the associated ETL/ELT patterns. Worked as a Data Modeller in one or two implementations, creating and implementing data models and database designs using dimensional and ER models. Good knowledge and experience in modelling complex scenarios such as many-to-many relationships, SCD types, late-arriving facts and dimensions, etc. Hands-on experience with at least one data modelling tool such as Erwin, ER/Studio, Enterprise Architect, or SQLDBM. Experience working closely with business stakeholders/business analysts to understand functional requirements and translate them into data models and database designs. Experience creating conceptual and logical models and translating them into physical models that address both functional and non-functional requirements. Strong knowledge of SQL, able to write complex queries and profile the data to understand relationships and data quality (DQ) issues. Very strong understanding of database modelling and design principles such as normalization, denormalization, and isolation levels. Experience in performance optimization through database design (physical modelling). Good communication skills. (ref:hirist.tech)
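A compact sketch of the SCD Type 2 pattern mentioned above: when a tracked attribute changes, the current dimension row is closed off and a new versioned row is inserted, preserving history. Column names and the SQLite target are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_sk    INTEGER PRIMARY KEY AUTOINCREMENT,
        customer_id    TEXT,     -- natural/business key
        city           TEXT,     -- tracked attribute
        effective_from TEXT,
        effective_to   TEXT,
        is_current     INTEGER
    );
    INSERT INTO dim_customer (customer_id, city, effective_from, effective_to, is_current)
    VALUES ('C001', 'Mumbai', '2024-01-01', '9999-12-31', 1);
""")

def apply_scd2(customer_id: str, new_city: str, change_date: str) -> None:
    """Close the current row and insert a new version if the tracked attribute changed."""
    row = conn.execute(
        "SELECT customer_sk, city FROM dim_customer WHERE customer_id = ? AND is_current = 1",
        (customer_id,),
    ).fetchone()
    if row and row[1] != new_city:
        conn.execute(
            "UPDATE dim_customer SET effective_to = ?, is_current = 0 WHERE customer_sk = ?",
            (change_date, row[0]),
        )
        conn.execute(
            "INSERT INTO dim_customer (customer_id, city, effective_from, effective_to, is_current) "
            "VALUES (?, ?, ?, '9999-12-31', 1)",
            (customer_id, new_city, change_date),
        )
        conn.commit()

apply_scd2("C001", "Pune", "2025-06-01")
print(conn.execute("SELECT * FROM dim_customer ORDER BY customer_sk").fetchall())
```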
Posted 1 month ago