3.0 years
6 - 9 Lacs
hyderabad
On-site
Location: Hyderabad, India

About DispatchTrack
DispatchTrack is the global leader in last mile logistics software, helping top brands power over 18 million deliveries a year. Since 2010, DispatchTrack’s scalable SaaS platform has made delivery organizations more connected, agile, and intelligent, using highly configurable capabilities designed to empower better delivery management from end to end. We’re constantly innovating to improve performance and better serve our 2,000+ customers, including Wal-Mart, Coca-Cola, Ashley, Ferguson Enterprises, and many others. When businesses make promises to their customers—DispatchTrack makes sure they deliver. Promise. Deliver. Delight.

Job Description:
We are looking for an Application Engineer with experience building coding agents and delivering scalable, high-performance applications. The role requires strong full-stack development expertise across multiple frameworks and databases, with AI/ML exposure.

Responsibilities:
- Design and develop with AI coding agents (Cursor AI, Windsurf, Codex, etc.) to automate development tasks
- Build and maintain applications using Ruby on Rails, Node.js, React, Angular, and Python
- Integrate and optimize databases (MySQL, PostgreSQL, ClickHouse)
- Develop API integrations with internal/external platforms
- Ensure application scalability, performance, and security
- Collaborate with cross-functional teams to deliver automation-driven solutions
- Maintain documentation and coding best practices
- Develop robust backend services and APIs with a focus on scalability, maintainability, and security
- Optimize database design, indexing, and query performance for high-volume data operations
- Implement asynchronous processing, job queues, and caching strategies (e.g., Redis, Kafka, Sidekiq)
- Apply system design principles for building distributed and fault-tolerant applications
- Conduct code reviews, design discussions, and knowledge sharing to maintain engineering standards
- Problem-solving and innovation mindset
- Strong communication and team collaboration skills

Must Have:
- Strong proficiency in server-side frameworks: Ruby on Rails, Node.js (backend-focused modules), Python (for APIs/scripts)
- Deep expertise in relational databases (MySQL, PostgreSQL): query optimization, schema normalization/denormalization, transactions, indexing
- Understanding of caching mechanisms, job scheduling, and data pipelines

Qualifications:
- Minimum first-class Bachelor's degree in Engineering from premier institutes
- 3+ years of experience as an Application Engineer

Please email careers.india@dispatchtrack.com to apply.
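The indexing and query-performance work this listing describes is easy to illustrate. A minimal SQL sketch, assuming a hypothetical high-volume deliveries table (names and columns are invented, not DispatchTrack's schema):

```sql
-- Hypothetical schema: a high-volume deliveries table queried by account and status.
CREATE TABLE deliveries (
    id           BIGINT PRIMARY KEY,
    account_id   BIGINT NOT NULL,
    status       VARCHAR(20) NOT NULL,
    promised_at  DATETIME NOT NULL,
    delivered_at DATETIME NULL
);

-- A composite index matching the query's filter and sort columns lets the
-- database satisfy both the WHERE clause and the ORDER BY from the index.
CREATE INDEX idx_deliveries_account_status_promised
    ON deliveries (account_id, status, promised_at);

-- Typical hot-path query this index serves:
SELECT id, promised_at, delivered_at
FROM deliveries
WHERE account_id = 42 AND status = 'IN_TRANSIT'
ORDER BY promised_at
LIMIT 50;
```

With the composite index in place the filter and sort are resolved without scanning the table, which matters at tens of millions of rows a year.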
Posted 7 hours ago
5.0 years
0 Lacs
hyderabad, telangana, india
Remote
Company Description
International Solutions Group (ISG) is a professional staffing firm specializing in contract, consulting, and full-time positions in Information Technology, Life Science, and Allied Healthcare areas. Since its inception in 2002, ISG has grown in size, opportunities, service, and quality. Our expert team of highly trained consultants works collaboratively with every client to form a partnership of commitment. Learn more about ISG’s IT services, capabilities, and experiences on our website: https://isgit.com/. If you're interested in working with us, please send your resume to hr@isgit.com.

Role Description
About the Role: We are looking for a Database Engineer to join our team, someone passionate about database technologies who enjoys working in a collaborative environment.

Position: Database Engineer
Experience: 5+ years
Notice Period: Immediate to 15 days
Locations: Nagpur (Hybrid); others remote, with office visits if required

The day-to-day:
- Analyze project/task requirements, assess their impact on the existing PostgreSQL architecture, and design and develop functions, stored procedures, triggers, and views using PL/pgSQL.
- Design, develop, and maintain PostgreSQL database infrastructure while ensuring high performance, scalability, and reliability.
- Break down complex business problems into task-based deliverables, including schema design, data modeling, and migration plans.
- Optimize query performance using EXPLAIN ANALYZE, indexing strategies (B-tree, GIN, GiST, BRIN), partitioning, and materialized views (a short tuning sketch follows this listing).
- Implement high availability using streaming replication, logical replication, and automated failover tools like Patroni or pg_auto_failover.
- Configure and tune database parameters (e.g., work_mem, shared_buffers, maintenance_work_mem) for optimal performance.
- Plan and implement backup and recovery strategies using tools like pgBackRest and native PITR (Point-in-Time Recovery) mechanisms.
- Enforce database security via role-based access control (RBAC), Row-Level Security (RLS), encryption at rest and in transit, and auditing using pgaudit.
- Integrate PostgreSQL with other systems using Foreign Data Wrappers (FDW) and support hybrid relational/JSONB data models.
- Set up observability using pg_stat_statements, pg_stat_activity, and metrics integration with Prometheus/Grafana.
- Collaborate closely with cross-functional teams to ensure database designs support business and application requirements.
- Participate in CI/CD pipelines for schema versioning and migrations using Liquibase or Flyway.
- Ensure cost-efficient and secure deployments on managed services like Amazon RDS, Azure Database for PostgreSQL, or Cloud SQL.

Who You Are:
- At least 5 years of experience as a PostgreSQL Developer or DBA.
- Strong proficiency in SQL and PL/pgSQL, with expertise in writing complex queries, window functions, CTEs, and recursive queries.
- Proven experience in schema design, normalization/denormalization strategies, and database lifecycle management.
- Deep understanding of indexing strategies, query planning, and performance tuning.
- Skilled at diagnosing and resolving database performance issues and production incidents.
- Familiarity with high availability, replication, backup/recovery, and database security best practices.
- Experience running PostgreSQL in cloud environments (Amazon Web Services, Microsoft Azure, or Google Cloud Platform).
- Strong analytical and problem-solving skills, with the ability to work independently and collaboratively across teams.
- Effective communication skills for interacting with developers, architects, and other stakeholders.

If interested, kindly acknowledge this mail and furnish the details below:
Total Experience:
Relevant Experience:
Current CTC:
Expected CTC:
Notice period at present organization:
Are you holding any other offer (Y/N):
Current Location:
Tentative joining date if selected:
Please attach your 3-month pay slips:
Please attach your updated CV:
Please attach your PAN card:
Please attach your UAN number:
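As an illustration of the EXPLAIN ANALYZE-driven tuning workflow this role describes: a minimal PostgreSQL sketch, assuming a hypothetical events table with a JSONB payload (all names are illustrative):

```sql
-- Hypothetical table mixing relational columns with a JSONB payload.
CREATE TABLE events (
    id          BIGSERIAL PRIMARY KEY,
    occurred_at TIMESTAMPTZ NOT NULL,
    payload     JSONB NOT NULL
);

-- Inspect the plan first; a sequential scan on a large table signals a missing index.
EXPLAIN ANALYZE
SELECT id FROM events WHERE payload @> '{"type": "login"}';

-- A GIN index with jsonb_path_ops accelerates containment (@>) queries.
CREATE INDEX idx_events_payload ON events USING GIN (payload jsonb_path_ops);

-- For very large tables, declarative range partitioning keeps per-partition
-- indexes small:
CREATE TABLE events_part (
    id          BIGSERIAL,
    occurred_at TIMESTAMPTZ NOT NULL,
    payload     JSONB NOT NULL
) PARTITION BY RANGE (occurred_at);

CREATE TABLE events_2025_q1 PARTITION OF events_part
    FOR VALUES FROM ('2025-01-01') TO ('2025-04-01');
```

Re-running EXPLAIN ANALYZE after creating the index confirms whether the planner actually uses it; that before/after comparison is the core of the workflow.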
Posted 12 hours ago
7.0 years
0 Lacs
noida
Remote
Principal Backend Engineer

WHAT MAKES US, US
Join some of the most innovative thinkers in FinTech as we lead the evolution of financial technology. If you are an innovative, curious, collaborative person who embraces challenges and wants to grow, learn and pursue outcomes with our prestigious financial clients, say Hello to SimCorp! At its foundation, SimCorp is guided by our values — caring, customer success-driven, collaborative, curious, and courageous. Our people-centered organization focuses on skills development, relationship building, and client success. We take pride in cultivating an environment where all team members can grow, feel heard, valued, and empowered. If you like what we’re saying, keep reading!

WHY THIS ROLE IS IMPORTANT TO US
As a Principal Backend Engineer, you will play a key role in driving technical excellence and leading backend development within your team. You will work across all stages of the development lifecycle—designing, coding, testing, reviewing, and deploying solutions—while mentoring team members and influencing technical strategy. You will leverage your expertise in .NET technologies, databases, and the Azure cloud ecosystem to build scalable, high-performance solutions that power business-critical applications. This role requires a strong mix of hands-on engineering, architectural thinking, and leadership to ensure both immediate delivery and long-term sustainability.

WHAT WILL YOU BE RESPONSIBLE FOR
- Apply engineering expertise across all phases of the development lifecycle: requirements review, design, implementation, code reviews, and automated testing.
- Work closely with product owners, architects, and stakeholders to translate business needs into scalable technical solutions.
- Establish and continuously improve engineering excellence practices (coding standards, testing strategies, CI/CD pipelines).
- Drive the development of new backend features and system enhancements using .NET, databases, and Azure technologies.
- Ensure code quality through refactoring, automated testing, and peer reviews.
- Lead adoption of best practices in cloud-native development, security, and scalability on Azure.
- Research and evaluate tools, frameworks, and approaches to support future innovations.
- Collaborate within a scrum team to consistently deliver on sprint commitments and meet the Definition of Done.
- Act as a mentor and role model, raising technical capabilities across the engineering team.

WHAT WE VALUE
Essential (Need to Have)
- 7+ years of hands-on experience with .NET technologies (C#, Microsoft .NET), including at least 2 years in a leading role.
- Proven experience building REST APIs with .NET (OData experience preferred).
- Strong working knowledge of Azure cloud services, including: Azure App Services / Azure Functions, Azure API Management, Azure DevOps (Pipelines, Repos, Boards), Azure Service Bus / Event Grid, etc.
- Strong database expertise: proficiency with SQL Server (design, optimization, indexing, performance tuning); understanding of data modeling, normalization/denormalization, and query optimization; experience with database versioning/migrations (a brief indexing sketch appears at the end of this listing).
- Proficient understanding of design patterns and principles (OOP, GoF, SOLID).
- Strong background in software development practices such as TDD and BDD.
- Hands-on experience with peer code reviews and code quality tools (e.g., SonarQube).
- Proficiency in CI/CD pipelines and deep understanding of DevOps practices.
- Strong fundamentals in software testing methodologies and automation frameworks.
- Experience with Git and collaborative workflows (feature branching, pull requests).
- Solid experience in an Agile/Scrum environment.
- Excellent problem-solving skills with the ability to make sound architectural and technical decisions.

Desirable (Nice to Have)
- Experience with JavaScript/TypeScript frameworks (Angular, Node.js).
- Familiarity with containerization and orchestration technologies (Docker, Kubernetes).
- Knowledge of data warehousing concepts and ETL/ELT pipelines (e.g., Azure Data Factory, Synapse).
- Experience with monitoring and observability tools (e.g., Application Insights, Grafana, Prometheus).
- Exposure to event-driven architectures and microservices with distributed databases.

Personal Competencies
- Comfortable working in an international, multicultural environment.
- Positive, constructive mindset—you bring energy, collaboration, and positive vibes to your team.
- Strong sense of responsibility for both technical quality and a healthy team culture.
- Excellent communication skills with fluency in English (speaking and writing).
- Natural leadership skills: able to inspire, mentor, and guide engineers while remaining hands-on.

BENEFITS
- Global hybrid work policy: we ask you to work 2 days a week from the office; if you choose, you can work remotely the other days. Of course, you are welcome at the office if that is your preference.
- Growth and innovation: every 6th sprint is reserved for planning and innovation, so at regular intervals you have a chance to explore and learn new skills or improve something that you believe will benefit you, your team, or the application.
- Self-direction: a high degree of self-organization. Each team and developer has a high degree of freedom to plan, organize, and design their work.
- Inclusive and diverse company culture.
- Work-life balance: we believe that an equilibrium between professional and private life makes us all the best version of ourselves, both in private life and as colleagues in the workplace.
- Empowerment: we believe that all voices are valuable and must be heard. You will be involved in shaping our work processes.

NEXT STEPS
Please send us your application in English via our career site as soon as possible; we process incoming applications continually. Please note that only applications sent through our system will be processed. At SimCorp, we recognize that bias can unintentionally occur in the recruitment process. To uphold fairness and equal opportunities for all applicants, we kindly ask you to exclude personal data such as photo, age, or any non-professional information from your application. Thank you for aiding us in our endeavor to mitigate biases in our recruitment process.

If you are interested in being a part of SimCorp but are not sure this role is suitable, submit your CV anyway. SimCorp is on an exciting growth journey, and our Talent Acquisition Team is ready to assist you in discovering the right role for you. The approximate time to consider your CV is three weeks. We are eager to continually improve our talent acquisition process and make everyone’s experience positive and valuable. Therefore, during the process we will ask you to provide your feedback, which is highly appreciated.

WHO WE ARE
For over 50 years, we have worked closely with investment and asset managers to become the world’s leading provider of integrated investment management solutions. We are 3,000+ colleagues with a broad range of nationalities, educations, professional experiences, ages, and backgrounds in general.
SimCorp is an independent subsidiary of the Deutsche Börse Group. Following the recent merger with Axioma, we leverage the combined strength of our brands to provide an industry-leading, full, front-to-back offering for our clients. SimCorp is an equal opportunity employer and welcomes applicants from all backgrounds, without regard to race, gender, age, disability, or any other protected status under applicable law. We are committed to building a culture where diverse perspectives and expertise are integrated into our everyday work. We believe in the continual growth and development of our employees, so that we can provide best-in-class solutions to our clients.
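The SQL Server tuning skills this listing asks for lend themselves to a compact example: a hedged T-SQL sketch of a covering nonclustered index, using a hypothetical trades table (all names and columns are illustrative):

```sql
-- Hypothetical SQL Server table for a trading workload.
CREATE TABLE dbo.Trades (
    TradeId     BIGINT IDENTITY PRIMARY KEY,
    PortfolioId INT           NOT NULL,
    TradeDate   DATE          NOT NULL,
    Quantity    DECIMAL(18,4) NOT NULL,
    Price       DECIMAL(18,6) NOT NULL
);

-- A covering index: the keys match the filter, and INCLUDE carries the
-- selected columns, so the query below never touches the base table.
CREATE NONCLUSTERED INDEX IX_Trades_Portfolio_Date
    ON dbo.Trades (PortfolioId, TradeDate)
    INCLUDE (Quantity, Price);

SELECT TradeDate, Quantity, Price
FROM dbo.Trades
WHERE PortfolioId = 7
  AND TradeDate >= '2025-01-01';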
Posted 2 days ago
6.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Title: Salesforce Developer – Data Cloud
Location: Chennai, India
Company: Altimetrik
Experience: 3 – 6 years
Mode: Full-time
Work Model: Hybrid

Role Overview
We are seeking a Salesforce Data Cloud professional with strong experience in data modeling and data integration. This role will be responsible for designing and implementing Data Cloud solutions that unify customer data from multiple systems, creating a single source of truth to drive personalization and engagement. The ideal candidate should combine expertise in Salesforce Data Cloud (CDP) with advanced knowledge of data modeling, ETL, and integrations.

Key Responsibilities
- Design, configure, and implement Salesforce Data Cloud solutions, including: Data Streams for ingestion; Data Model Objects (DMOs) and relationships to support scalable customer data models; Identity Resolution for unified customer profiles; and Calculated Insights, Segments, and Activations for analytics and personalization.
- Define and optimize data models, schema designs, and relationships within Salesforce Data Cloud to support business use cases.
- Integrate Data Cloud with Salesforce Marketing Cloud, Sales Cloud, and Service Cloud for end-to-end personalization.
- Collaborate with business teams to translate requirements into data models and architecture.
- Implement ETL, API, and middleware (e.g., MuleSoft) integrations for large-scale data ingestion.
- Apply data governance, privacy, and compliance best practices.
- Provide technical leadership in data modeling, ensuring scalability and performance.
- Document solutions and provide training to stakeholders.

Required Skills & Qualifications
- 5–9 years of Salesforce ecosystem experience, with at least 2 years in Salesforce Data Cloud/CDP.
- Strong expertise in data modeling, schema design, entity relationships, and normalization/denormalization techniques.
- Hands-on experience with Data Streams, DMOs, Identity Resolution, Calculated Insights, Segments, and Activations.
- Proficiency in SQL, data transformation, and performance optimization.
- Experience with Salesforce integrations (APIs, MuleSoft, ETL tools, or connectors).
- Good understanding of Marketing Cloud, Service Cloud, or Sales Cloud.
- Salesforce certifications (preferred): Data Cloud Consultant, Data Architect, Platform Developer I/II.

Good to Have
- Exposure to Einstein AI/Analytics with Data Cloud.
- Experience with Snowflake, AWS, or GCP data pipelines.
- Knowledge of Customer 360 Audiences and advanced segmentation strategies.
- Prior experience working in Agile/Scrum environments.
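Identity resolution is the heart of this role. As a generic illustration (plain SQL, not Data Cloud's actual match-rule syntax), deduplicating staged customer records by a deterministic match key might look like this, with all table and column names assumed:

```sql
-- Collapse multi-source records into one "golden" profile per match key.
WITH ranked AS (
    SELECT
        source_system,
        customer_id,
        LOWER(email) AS match_key,      -- deterministic match rule: same email
        last_updated,
        ROW_NUMBER() OVER (
            PARTITION BY LOWER(email)
            ORDER BY last_updated DESC  -- most recently updated record survives
        ) AS rn
    FROM staged_customers
)
SELECT match_key, source_system, customer_id
FROM ranked
WHERE rn = 1;
```

Data Cloud performs this matching declaratively through Identity Resolution rulesets; the SQL above only shows the underlying idea.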
Posted 4 days ago
0 years
0 Lacs
andhra pradesh
On-site
- Design and develop conceptual, logical, and physical data models for enterprise and application-level databases.
- Translate business requirements into well-structured data models that support analytics, reporting, and operational systems.
- Define and maintain data standards, naming conventions, and metadata for consistency across systems.
- Collaborate with data architects, engineers, and analysts to implement models into databases and data warehouses/lakes.
- Analyze existing data systems and provide recommendations for optimization, refactoring, and improvements.
- Create entity relationship diagrams (ERDs) and data flow diagrams to document data structures and relationships.
- Support data governance initiatives including data lineage, quality, and cataloging.
- Review and validate data models with business and technical stakeholders.
- Provide guidance on normalization, denormalization, and performance tuning of database designs.
- Ensure models comply with organizational data policies, security, and regulatory requirements.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
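As a small, hypothetical illustration of the normalization-versus-denormalization guidance above (generic SQL; all names are invented): a normalized operational pair of tables, plus a denormalized reporting view layered on top:

```sql
-- Normalized (3NF) operational model: customer attributes live in one place.
CREATE TABLE customer (
    customer_id BIGINT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    region      VARCHAR(50)  NOT NULL
);

CREATE TABLE orders (
    order_id    BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL REFERENCES customer (customer_id),
    order_date  DATE          NOT NULL,
    amount      DECIMAL(12,2) NOT NULL
);

-- Denormalized reporting shape: repeats customer attributes per order so
-- analysts avoid the join, at the cost of update anomalies if materialized.
CREATE VIEW order_report AS
SELECT o.order_id, o.order_date, o.amount,
       c.name AS customer_name, c.region
FROM orders o
JOIN customer c ON c.customer_id = o.customer_id;
```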
Posted 6 days ago
7.0 years
0 Lacs
hyderabad, telangana, india
On-site
Oaktree is a leader among global investment managers specializing in alternative investments, with over $200 billion in assets under management. The firm emphasizes an opportunistic, value-oriented and risk-controlled approach to investments in credit, private equity, real assets and listed equities. The firm has over 1,400 employees and offices in 25 cities worldwide. We are committed to cultivating an environment that is collaborative, curious, inclusive and honors diversity of thought. Providing training and career development opportunities and emphasizing strong support for our local communities through philanthropic initiatives are essential to our culture. For additional information please visit our website at www.oaktreecapital.com.

The Risk, Reporting & Analytics (RR&A) department is responsible for delivering best-in-class, technology-enabled analysis and reporting to Oaktree’s investors, both current and prospective, and to our investment professionals and organizational partners globally. The Senior Associate of Investment Data and Reporting will be well versed in the instruments and markets Oaktree participates in, with an emphasis on credit. The incumbent will oversee the curation of deal data throughout the investment life cycle, working in close partnership with deal teams, administrative agents, and middle- and back-office teams. In addition, the incumbent will leverage data science to build extracts, reports, and dashboards with our Data Solutions, Technology, and reporting teams. This is a critical middle-office data role, with the incumbent serving as a hub for the communication, capture, ingestion, and downstream use of certified financial and qualitative data on our portfolio companies, transactions, and holdings.

Responsibilities include:

Data Validation and Exception Management
- Partner with investment professionals to streamline deal monitoring templates and centralize dissemination of initial deal data and amendments to downstream constituents of the data;
- Adhere to and enhance controls frameworks for the definition, ingestion, curation, calculation, review and ultimately use of portfolio company, transaction, and holdings-related data sets throughout the firm;
- Validate and enrich quantitative and qualitative deal data throughout the investment life cycle, working closely with business partners to create and review system-generated exceptions, and rectify any errors;
- Centrally manage entity mastering for private companies together with Data Solutions and Technology teams;
- Ensure accurate, complete, reliable, and investor-ready data is available for use across the reporting suite in accordance with internal SLAs;
- Build and maintain the highest-quality fixed income data (e.g., coupon, yield, spread, duration) in collaboration with data management, investment operations, and analytics teams; and
- Leverage proprietary and third-party software tools to streamline data capture for portfolio monitoring, loan administration, valuation, and reporting.
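A system-generated exception of the kind described above is often just a query. As a minimal sketch, assuming a hypothetical holdings table and illustrative thresholds:

```sql
-- Surface records that should raise data-quality exceptions before they
-- reach investor reporting (table, columns, and ranges are assumptions).
SELECT holding_id, instrument_id, coupon, yield_to_maturity, spread_bps
FROM holdings
WHERE coupon IS NULL                            -- missing required attribute
   OR yield_to_maturity < 0                     -- implausible value
   OR spread_bps NOT BETWEEN -1000 AND 10000;   -- out-of-range spread
```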
Technology-enabled Reporting
- Create and maintain reporting views in our data lakes reflecting gold-source investment-related data sets, in partnership with Data Solutions, Technology, and reporting teams;
- Utilize Python, SQL and data visualization tools (e.g., Power BI) to manage and manipulate large data sets and create standardized reports or dashboards;
- Support the implementation of process changes, automated reports and technology systems to generate standard and recurring reporting as well as dynamic performance and portfolio metrics and insights; and
- Leverage reporting and systems knowledge to gather and document implementation requirements.

Partnership & Innovation
- Collaborate with organizational partners to ensure robust data and the production and advancement of RR&A deliverables; these partners include Data Solutions, the Investment/Portfolio Management Team, Product Specialists, Investor Relations, Marketing and business development, Accounting and Operations, IT, and Compliance;
- Partner with “citizen developer” teams on the strategic development and tactical execution of technology-enabled reporting activities to benefit existing and prospective investors, investment/portfolio management teams, and business development and investor relations; and
- Identify and capture opportunities to create superior data solutions and build efficiencies through process and technological improvements.

Required Experience:
- 7+ years of relevant technical experience developing/implementing data lake/data warehouse technologies, preferably at an asset management company, investment bank or other financial services company;
- Solid knowledge of alternative investments broadly, as well as specific knowledge of relevant reporting/dashboard outputs and metrics, including performance reporting, risk metrics and portfolio characteristics;
- Strong technical skills with data warehousing and integration technologies;
- Excellent data modeling skills (normalization/denormalization, data warehouse schema types, dimensional modeling), preferably related to:
  – portfolio-company level data (e.g., financials, quantitative and qualitative deal data),
  – fund-deal data (e.g., entry/exit date, issue-to-deal mapping, deal classifications, investment-level cash flows and performance), and
  – holdings and benchmark constituents (e.g., base rates, yields, spreads, credit quality, market bid/ask);
- Experience building dashboards and data visualizations using Tableau, Power BI or other reporting platforms;
- Hands-on software development expertise in one or more languages/frameworks such as Python or Scala/Spark;
- Knowledge of cloud-based data services and platforms (e.g., AWS, Azure) is a plus;
- Track record of leading solution delivery of end-to-end data projects;
- Familiarity with Agile project management methodology using tools such as JIRA and Confluence; and
- Availability to work during portions of U.S. PST working hours and India IST working hours to interface with key stakeholders.

Personal Attributes
- A self-starter with a proven ability to operate independently and collaboratively on short- and long-term goals while maintaining the highest quality standards.
- Strong interpersonal and communication skills (verbal and written); excels in a consensus-oriented team environment.
- A natural problem solver with the resolve to independently identify, recommend and implement improvements to enhance productivity, the stakeholder experience, and the overall data and reporting platform.
- Outstanding attention to detail, superior organizational skills and the ability to effectively create and manage complex project plans with diversified workstreams and competing deadlines.
- Strong documentation skills for processes, steps, and methodology.
- Passion for improving systems and processes.
- Strong integrity and professionalism, and belief in Oaktree’s common goal of excellence.

Education
Bachelor’s degree required. Engineering, Computer Science, Finance, Information Systems or related areas of study preferred.

Equal Opportunity Employment Policy
Oaktree is committed to diversity and to equal opportunity employment. Oaktree does not make employment decisions on the basis of race, creed, color, ethnicity, national origin, citizenship, religion, sex, sexual orientation, gender identity, gender expression, age, past or present physical or mental disability, HIV status, medical condition as defined by state law (genetic characteristics or cancer), pregnancy, childbirth and related medical conditions, veteran status, military service, marital status, familial status, genetic information, domestic violence victim status or any other classification protected by applicable federal, state and local laws and ordinances. This policy applies to hiring, placement, internal promotions, training, opportunities for advancement, recruitment advertising, transfers, demotions, layoffs, terminations, rates of pay and other forms of compensation and all other terms, conditions and privileges of employment. This policy applies to all Oaktree applicants, employees, clients, and contractors. Staff members wishing to report violations or suspected violations of this policy should contact the head of their department or Human Resources.

For positions based in Los Angeles
For those applying for a position in the city of Los Angeles, the firm will consider for employment qualified applicants with a criminal history in a manner consistent with applicable federal, state and local law.
Posted 1 week ago
5.0 years
0 Lacs
hyderabad, telangana, india
On-site
Oaktree is a leader among global investment managers specializing in alternative investments, with over $200 billion in assets under management. The firm emphasizes an opportunistic, value-oriented and risk-controlled approach to investments in credit, private equity, real assets and listed equities. The firm has over 1,400 employees and offices in 25 cities worldwide. We are committed to cultivating an environment that is collaborative, curious, inclusive and honors diversity of thought. Providing training and career development opportunities and emphasizing strong support for our local communities through philanthropic initiatives are essential to our culture. For additional information please visit our website at www.oaktreecapital.com.

The Risk, Reporting & Analytics (RR&A) department is responsible for delivering best-in-class, technology-enabled analysis and reporting to Oaktree’s investors, both current and prospective, and to our investment professionals and organizational partners globally. The Senior Associate of Investment Data and Reporting will be well versed in the instruments and markets Oaktree participates in, with an emphasis on credit. The incumbent will oversee the curation of deal data throughout the investment life cycle, working in close partnership with deal teams, administrative agents, and middle- and back-office teams. In addition, the incumbent will leverage data science to build extracts, reports, and dashboards with our Data Solutions, Technology, and reporting teams. This is a critical middle-office data role, with the incumbent serving as a hub for the communication, capture, ingestion, and downstream use of certified financial and qualitative data on our portfolio companies, transactions, and holdings.

Responsibilities include:

Data Validation and Exception Management
- Partner with investment professionals to streamline deal monitoring templates and centralize dissemination of initial deal data and amendments to downstream constituents of the data;
- Adhere to and enhance controls frameworks for the definition, ingestion, curation, calculation, review and ultimately use of portfolio company, transaction, and holdings-related data sets throughout the firm;
- Validate and enrich quantitative and qualitative deal data throughout the investment life cycle, working closely with business partners to create and review system-generated exceptions, and rectify any errors;
- Centrally manage entity mastering for private companies together with Data Solutions and Technology teams;
- Ensure accurate, complete, reliable, and investor-ready data is available for use across the reporting suite in accordance with internal SLAs;
- Build and maintain the highest-quality fixed income data (e.g., coupon, yield, spread, duration) in collaboration with data management, investment operations, and analytics teams; and
- Leverage proprietary and third-party software tools to streamline data capture for portfolio monitoring, loan administration, valuation, and reporting.
Technology-enabled Reporting
- Create and maintain reporting views in our data lakes reflecting gold-source investment-related data sets, in partnership with Data Solutions, Technology, and reporting teams;
- Utilize Python, SQL and data visualization tools (e.g., Power BI) to manage and manipulate large data sets and create standardized reports or dashboards;
- Support the implementation of process changes, automated reports and technology systems to generate standard and recurring reporting as well as dynamic performance and portfolio metrics and insights; and
- Leverage reporting and systems knowledge to gather and document implementation requirements.

Partnership & Innovation
- Collaborate with organizational partners to ensure robust data and the production and advancement of RR&A deliverables; these partners include Data Solutions, the Investment/Portfolio Management Team, Product Specialists, Investor Relations, Marketing and business development, Accounting and Operations, IT, and Compliance;
- Partner with “citizen developer” teams on the strategic development and tactical execution of technology-enabled reporting activities to benefit existing and prospective investors, investment/portfolio management teams, and business development and investor relations; and
- Identify and capture opportunities to create superior data solutions and build efficiencies through process and technological improvements.

Required Experience:
- 5+ years of relevant technical experience developing/implementing data lake/data warehouse technologies, preferably at an asset management company, investment bank or other financial services company;
- Solid knowledge of alternative investments broadly, as well as specific knowledge of relevant reporting/dashboard outputs and metrics, including performance reporting, risk metrics and portfolio characteristics;
- Strong technical skills with data warehousing and integration technologies;
- Excellent data modeling skills (normalization/denormalization, data warehouse schema types, dimensional modeling), preferably related to:
  – portfolio-company level data (e.g., financials, quantitative and qualitative deal data),
  – fund-deal data (e.g., entry/exit date, issue-to-deal mapping, deal classifications, investment-level cash flows and performance), and
  – holdings and benchmark constituents (e.g., base rates, yields, spreads, credit quality, market bid/ask);
- Experience building dashboards and data visualizations using Tableau, Power BI or other reporting platforms;
- Hands-on software development expertise in one or more languages/frameworks such as Python or Scala/Spark;
- Knowledge of cloud-based data services and platforms (e.g., AWS, Azure) is a plus;
- Track record of leading solution delivery of end-to-end data projects;
- Familiarity with Agile project management methodology using tools such as JIRA and Confluence; and
- Availability to work during portions of U.S. PST working hours and India IST working hours to interface with key stakeholders.

Personal Attributes
- A self-starter with a proven ability to operate independently and collaboratively on short- and long-term goals while maintaining the highest quality standards;
- Strong interpersonal and communication skills (verbal and written); excels in a consensus-oriented team environment;
- A natural problem solver with the resolve to independently identify, recommend and implement improvements to enhance productivity, the stakeholder experience, and the overall data and reporting platform;
- Outstanding attention to detail, superior organizational skills and the ability to effectively create and manage complex project plans with diversified workstreams and competing deadlines;
- Strong documentation skills for processes, steps, and methodology;
- Passion for improving systems and processes; and
- Strong integrity and professionalism, and belief in Oaktree’s common goal of excellence.

Education
Bachelor’s degree required. Engineering, Computer Science, Finance, Information Systems or related areas of study preferred.

Equal Opportunity Employment Policy
Oaktree is committed to diversity and to equal opportunity employment. Oaktree does not make employment decisions on the basis of race, creed, color, ethnicity, national origin, citizenship, religion, sex, sexual orientation, gender identity, gender expression, age, past or present physical or mental disability, HIV status, medical condition as defined by state law (genetic characteristics or cancer), pregnancy, childbirth and related medical conditions, veteran status, military service, marital status, familial status, genetic information, domestic violence victim status or any other classification protected by applicable federal, state and local laws and ordinances. This policy applies to hiring, placement, internal promotions, training, opportunities for advancement, recruitment advertising, transfers, demotions, layoffs, terminations, rates of pay and other forms of compensation and all other terms, conditions and privileges of employment. This policy applies to all Oaktree applicants, employees, clients, and contractors. Staff members wishing to report violations or suspected violations of this policy should contact the head of their department or Human Resources.

For positions based in Los Angeles
For those applying for a position in the city of Los Angeles, the firm will consider for employment qualified applicants with a criminal history in a manner consistent with applicable federal, state and local law.
Posted 1 week ago
5.0 years
0 Lacs
hyderabad, telangana, india
On-site
Role Description

Role Proficiency: Resolve L1 incidents and service requests within agreed SLA.

Outcomes
1) Monitor customer infrastructure using tools or defined SOPs to identify failures and mitigate them by raising tickets with defined priority and severity
2) Update SOPs with updated troubleshooting instructions and process changes
3) Mentor new team members in understanding customer infrastructure and processes
4) Perform analysis for driving incident reduction
5) Resolve L1 incidents and service requests

Measures of Outcomes
1) SLA adherence
2) Compliance towards runbook-based troubleshooting process
3) Time-bound elevations and routing of tickets – OLA adherence
4) Schedule adherence in managing ticket backlogs
5) Number of NCs in internal/external audits
6) Number of KB changes suggested
7) Production readiness of new joiners within agreed timeline through one-on-one mentorship
8) % completion of all mandatory training requirements
9) Number of tickets reduced by analysis
10) Number of installation SRs handled for endpoints / change tasks completed for infrastructure
11) Number of L1 tickets closed

Outputs Expected:

Monitoring
- Understand priority and severity based on ITIL practice. Understand the agreed SLA with the customer and adhere to it. Perform repetitive analysis to find high ticket-generating CIs (configuration items). Adhere to ITIL best practices.

Runbook Reference/Change
- Follow the runbook for troubleshooting, record troubleshooting steps, and provide inputs for runbook changes.

Escalation/Elevation/Routing of Tickets
- Escalate within the organization or to the customer peer in case of resolution delay. Understand OLAs between delivery layers (L1, L2, L3, etc.), adhere to the OLA, route tickets to the relevant queue, and initiate intimation to respective teams/customer based on the defined process.

Tickets Backlog/Resolution
- Follow up on tickets based on agreed timelines and manage ticket backlogs/last activity as per the defined process. Resolve incidents and SRs within agreed timelines. Execute change tasks for infrastructure.

Collaboration
- Collaborate with different towers of delivery for ticket resolution (within SLA) and document learnings for self-reference. Close/resolve L1 tickets with help from the respective tower. Actively participate in team/organization-wide initiatives.

Installation
- Install software/tools and patches.

Stakeholder Management
- Lead customer and vendor calls. Organize meetings with different stakeholders. Participate in RCA meetings.

Process Adherence
- Thorough understanding of organization- and customer-defined processes. Consult with a mentor when in doubt. Adhere to defined processes and to the organization's policies and business conduct.

Training
- On-time completion of all mandatory training requirements of the organization and the customer. Provide on-floor training and one-on-one mentorship for new joiners.

Performance Management
- Update FAST goals in NorthStar; track, report, and seek continuous feedback from peers and manager. Set goals and provide feedback for mentees. Assist new team members in understanding the customer environment.

Skill Examples
1) Good communication skills (written, verbal, and email etiquette) to interact with different teams and customers
2) Networking:
   a. Good in monitoring tools and device backup scheduling
   b. Basic DHCP and DNS configuration in routers and switches
   c. Basic troubleshooting skills with 'show ip route', 'sh mac address-table', etc.
   d. Basics of static and dynamic IP routing protocols
3) Server:
   a. Basic to intermediate PowerShell/BASH/Python scripting skills
   b. Manual patching of QA servers
   c. Analyse space issues on a server and engage the Capacity Mgmt. team for disk expansion
4) Storage and Backup:
   a. Ability to handle storage and backup issues independently
   b. Ability to handle vendor management, device management, and storage array management
   c. Perform hardware upgrades, firmware upgrades, and vulnerability remediation
   d. Ticket analysis, storage and backup performance management, and various troubleshooting
5) Database:
   a. Patching and upgrading the DB server and application tools
   b. Tweak queries, making them run as fast as possible
   c. Logical and physical schema design (indexing, constraints, partitioning, etc.)
   d. Ability to visualize and debug the end-to-end flow of business transaction models and applications
   e. DB migration, export/import

Knowledge Examples
1) Fair understanding of customer infrastructure and ability to correlate failures
2) Monitoring knowledge in infrastructure tools
3) Networking:
   a. IP addressing and subnetting knowledge
   b. Preferably certified in Cisco's basic certification track
   c. IOS upgrade and IOS patching knowledge
4) Server:
   a. Intermediate-level knowledge in Active Directory, DNS, DHCP, DFS, IIS, and patch management
   b. Strong knowledge of backup tools such as Veritas/Commvault/Windows backup, storage concepts, etc.
   c. Strong virtualization and basic cloud knowledge
   d. AD group policy management, group policy tools, and troubleshooting GPOs
   e. Basic AD object creation, DNS concepts, DHCP, DFS
   f. Knowledge of tools like SCCM and SCOM administration
5) Storage and Backup:
   a. In-depth knowledge of storage and backup technology, storage allocation and reclamation, and backup policy creation and management
   b. Strong knowledge of server, network, and virtualization technologies
6) Tools:
   a. Knowledge of infrastructure and application technologies
   b. Understanding of monitoring concepts and process
   c. Understanding of key network monitoring protocols including SNMP, NetFlow, WMI, syslog, etc.
   d. Knowledge of administration of tools like SCOM, SolarWinds, CA UIM, Nagios, ServiceNow, etc.
7) Monitoring:
   a. Good understanding of networking concepts and protocols
   b. Knowledge of server, backup, and storage technologies
   c. Desirable to have knowledge of SQL scripting
   d. Knowledge of the ITIL process
8) Database:
   a. Knowledge of database security
9) Quality Analysis:
   a. Exposure to FMEA audit practices
   b. Exposure to technology/processes as per audit requirements
10) Working knowledge of MS Excel, Word, PPT, Outlook, etc.

Additional Comments

1. Data Architecture & Modelling
   - Designed or worked with data models for any domain data
   - Understanding of data normalization vs. denormalization
   - Previous experience with conceptual, logical, and physical data models
   - Have you contributed to designing curated or semantic data layers?

2. Technical Skills
   - Comfort level in writing and optimizing SQL queries and Python scripts
   - Worked with Databricks or Spark-based platforms for data processing
   - Experience with PySpark transformations or data pipelines for Databricks Unity Catalog tables (see the sketch below)
   - Developed or supported backend queries/models for Power BI reports
   - Experience in building or consuming API endpoints
   - Experience with data catalog or governance tools (Collibra)
   - Exposure to data visualization platforms (Power BI, Tableau)

3. Soft Skills and Domain
   - Collaboration with cross-functional teams (engineers, architects, governance leads)
   - Worked with metadata, lineage, or classification in your projects
   - Exposure to cybersecurity data (logs, assets, vulnerabilities, etc.)
   - Have you supported reporting or compliance initiatives in any domain?

4. Education and Experience
   - 5+ years in data engineering and a minimum of 3 years' experience in Databricks (notebooks, Unity Catalog), PySpark/Spark, and MySQL
   - Bachelor's/Master's degree in Engineering, Computer Science, or a related background

5. Candidate Availability
   - Candidate is available to join within 0-15 days
   - Candidate is available to join within 15-30 days
   - Flexibility to work from the Hyderabad location

Good to Have

6. Experience & Knowledge
   - Certifications in Databricks or cloud platforms
   - Broad knowledge of cybersecurity principles and disciplines
   - Knowledge of ML pipelines or advanced analytics using Databricks

Skills: Databricks, PySpark, ETL, MySQL
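As a hedged sketch of the Databricks Unity Catalog pipeline work mentioned above (Databricks SQL; the main.raw/main.curated names and the cybersecurity columns are placeholders):

```sql
-- Build a curated Unity Catalog table from a raw layer.
CREATE TABLE IF NOT EXISTS main.curated.vulnerabilities AS
SELECT
    asset_id,
    cve_id,
    severity,
    CAST(detected_at AS TIMESTAMP) AS detected_at
FROM main.raw.scanner_events
WHERE severity IN ('HIGH', 'CRITICAL');

-- Incremental upkeep with MERGE keeps the curated layer idempotent.
MERGE INTO main.curated.vulnerabilities AS t
USING (
    SELECT asset_id, cve_id, severity,
           CAST(detected_at AS TIMESTAMP) AS detected_at
    FROM main.raw.scanner_events
    WHERE severity IN ('HIGH', 'CRITICAL')
) AS s
ON t.asset_id = s.asset_id AND t.cve_id = s.cve_id
WHEN NOT MATCHED THEN
    INSERT (asset_id, cve_id, severity, detected_at)
    VALUES (s.asset_id, s.cve_id, s.severity, s.detected_at);
```

The same transformation could equally be written as a PySpark notebook cell; SQL is used here only for consistency with the other sketches on this page.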
Posted 1 week ago
5.0 years
0 Lacs
gurgaon, haryana, india
On-site
Associate Manager / Manager
Exp: 5 - 13 years
Location: Bangalore, Chennai, Pune, Kolkata, Gurgaon

JD for Data Modeller
=================
- 5+ years of working experience in Data Engineering and Data Analytics projects, implementing Data Warehouse, Data Lake, and Lakehouse solutions and the associated ETL/ELT patterns.
- Worked as a Data Modeller on one or two implementations, creating and implementing data models and database designs using dimensional models.
- Good knowledge of and experience in modelling complex scenarios such as many-to-many relationships, SCD types, late-arriving facts and dimensions, etc.
- Hands-on experience with at least one data modelling tool such as Erwin, ER/Studio, Enterprise Architect, or SQLDBM.
- Experience working closely with business stakeholders/business analysts to understand functional requirements and translate them into data models and database designs.
- Experience creating conceptual and logical models and translating them into physical models that address both functional and non-functional requirements.
- Strong knowledge of SQL; able to write complex queries and profile the data to understand relationships and DQ issues.
- Very strong understanding of database modelling and design principles such as normalization, denormalization, and isolation levels.
- Experience in performance optimization through database design (physical modelling).
- Good communication skills.
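Of the scenarios listed above, SCD Type 2 is the most mechanical to illustrate. A minimal sketch in generic SQL, assuming a hypothetical customer dimension (keys and values are invented):

```sql
-- Type 2 slowly changing dimension: history is preserved by closing the
-- current row and inserting a new version.
CREATE TABLE dim_customer (
    customer_sk BIGINT PRIMARY KEY,      -- surrogate key
    customer_id BIGINT      NOT NULL,    -- business key
    segment     VARCHAR(50) NOT NULL,
    valid_from  DATE        NOT NULL,
    valid_to    DATE,                    -- NULL = current version
    is_current  CHAR(1)     NOT NULL DEFAULT 'Y'
);

-- When customer 42's segment changes, expire the current row...
UPDATE dim_customer
SET valid_to = CURRENT_DATE, is_current = 'N'
WHERE customer_id = 42 AND is_current = 'Y';

-- ...then insert the new version with a fresh surrogate key.
INSERT INTO dim_customer
    (customer_sk, customer_id, segment, valid_from, valid_to, is_current)
VALUES
    (1002, 42, 'PREMIUM', CURRENT_DATE, NULL, 'Y');
```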
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
You should have 8-10+ years of experience in MySQL, SQL, and Oracle. Your responsibilities will include:
- Designing and implementing databases and data models; creating indexes, views, complex triggers, stored procedures, and functions for efficient data manipulation and consistency.
- Designing databases and writing sequences, jobs, and complex queries in SQL, MySQL, and Oracle.
- Troubleshooting, optimizing, and tuning SQL processes and complex queries; writing unit test cases; and understanding database transactions and states.
- Designing, implementing, and monitoring queries and stored procedures for performance; debugging programs; and integrating applications with third-party web services.
- Identifying opportunities for improved performance in SQL operations and implementations.
- Database backup, restore, and maintenance, as well as working across development, testing, UAT, and production environments and their databases.

A strong understanding of keys, constraints, indexes, joins, CTEs, partitioning, ROW_NUMBER() and other window functions, temporary tables, UDTs, UNION variants, materialized views, etc. is essential (a short example appears below). Experience in SSRS and SSIS is required, and experience in SSAS will be a plus. Good communication skills are essential for this role.

Key Skills: performance tuning, database transactions, SSRS, SQL query optimization, Oracle, de-normalization, triggers, backup and restore, SSAS, views, indexes, functions, stored procedures, database design, data modeling, unit testing, MySQL, database normalization, SSIS, SQL
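A compact illustration of two of the constructs named above, a CTE plus the ROW_NUMBER() window function, picking each customer's latest order (hypothetical orders table; runs on MySQL 8+, Oracle, and SQL Server):

```sql
WITH latest AS (
    SELECT
        customer_id,
        order_id,
        order_date,
        ROW_NUMBER() OVER (
            PARTITION BY customer_id      -- restart numbering per customer
            ORDER BY order_date DESC      -- newest order gets rn = 1
        ) AS rn
    FROM orders
)
SELECT customer_id, order_id, order_date
FROM latest
WHERE rn = 1;
```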
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra, india
On-site
Calling all innovators – find your future at Fiserv.

We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Tech Lead, Data Architecture

What does a great Data Architect do at Fiserv?
We are seeking a seasoned Data Architect with extensive experience in data modeling and architecting data solutions, particularly with Snowflake. The ideal candidate will have 8-12 years of hands-on experience in designing, implementing, and optimizing data architectures to meet the evolving needs of our organization. As a Data Architect, you will play a pivotal role in ensuring the robustness, scalability, and efficiency of our data systems.

What You Will Do
- Data Architecture Design: Develop, optimize, and oversee conceptual and logical data systems, ensuring they meet both current and future business requirements.
- Data Modeling: Create and maintain data models using Snowflake, ensuring data integrity, performance, and security.
- Solution Architecture: Design and implement end-to-end data solutions, including data ingestion, transformation, storage, and access.
- Stakeholder Collaboration: Work closely with business stakeholders, data scientists, and engineers to understand data requirements and translate them into technical specifications.
- Performance Optimization: Monitor and improve data system performance, addressing any issues related to scalability, efficiency, and data quality.
- Governance and Compliance: Ensure data architectures comply with data governance policies, standards, and industry regulations.
- Technology Evaluation: Stay current with emerging data technologies and assess their potential impact and value to the organization.
- Mentorship and Leadership: Provide technical guidance and mentorship to junior data architects and engineers, fostering a culture of continuous learning and improvement.

What You Will Need To Have
- 8-12 years of experience in data architecture and data modeling in Snowflake.
- Proficiency in the Snowflake data warehousing platform.
- Strong understanding of data modeling concepts, including normalization, denormalization, star schema, and snowflake schema (a brief sketch appears at the end of this listing).
- Experience with ETL/ELT processes and tools.
- Familiarity with data governance and data security best practices.
- Knowledge of SQL and performance tuning for large-scale data systems.
- Experience with cloud platforms (AWS, Azure, or GCP) and related data services.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills, with the ability to translate technical concepts for non-technical stakeholders.
- Demonstrated ability to lead and mentor technical teams.

What Would Be Nice To Have
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Certifications: Snowflake certifications or other relevant industry certifications.
- Industry Experience: Experience in the Finance/Cards/Payments industry.

Thank you for considering employment with Fiserv. Please apply using your legal name, complete the step-by-step profile, and attach your resume (either is acceptable, both are preferable).
Our Commitment To Diversity And Inclusion Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note To Agencies Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning About Fake Job Posts Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
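For context on the star/snowflake-schema modeling this role calls for, here is a minimal sketch of one fact table with two dimensions, in generic SQL. All table and column names are illustrative, not taken from any employer's systems.

-- Dimension tables hold descriptive attributes (often denormalized for query speed).
CREATE TABLE dim_customer (
    customer_key  INT          PRIMARY KEY,  -- surrogate key
    customer_id   VARCHAR(20)  NOT NULL,     -- natural/business key
    customer_name VARCHAR(100) NOT NULL,
    segment       VARCHAR(30)
);

CREATE TABLE dim_date (
    date_key  INT  PRIMARY KEY,              -- e.g. 20250715
    full_date DATE NOT NULL,
    year_num  INT  NOT NULL,
    month_num INT  NOT NULL
);

-- The fact table stores additive measures keyed to the dimensions.
CREATE TABLE fact_payment (
    payment_id   BIGINT PRIMARY KEY,
    customer_key INT    NOT NULL REFERENCES dim_customer (customer_key),
    date_key     INT    NOT NULL REFERENCES dim_date (date_key),
    amount       DECIMAL(12,2) NOT NULL,
    currency     CHAR(3) NOT NULL
);

A snowflake schema would further normalize the dimensions (for example, splitting segment out into its own table) at the cost of extra joins at query time.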
Posted 1 week ago
0 years
0 Lacs
chennai
On-site
Location: Chennai, Tamil Nadu Salary Range: Description: About the Company Ignitho Inc. is a leading AI and data engineering company with a global presence, including offices in the US, UK, India, and Costa Rica. Visit our website to learn more. Ignitho is a portfolio company of Nuivio Ventures Inc., a venture builder dedicated to developing Enterprise AI product companies across various domains, including AI, Data Engineering, and IoT. Job Summary The Senior Data Architect will lead the design and implementation of scalable, high-performance data architectures across both on-premise and cloud environments. This role involves advanced data modeling, SQL development, and ETL design, with a strong focus on data quality, integrity, and strategy. The architect will collaborate closely with cross-functional teams to align technical solutions with business needs, perform in-depth data analysis, and drive the development of robust data models that support long-term organizational goals. Core Competencies Data Architecture & Strategy Advanced Data Modeling (Star, Snowflake) SQL Development (Partitioning, Stored Procedures, Recursive Queries) Data Profiling & Root Cause Analysis ETL Design & Implementation Cloud & On-Premise Data Platforms (Oracle, Databricks) Data Quality & Governance Stakeholder Engagement Change Data Capture & Audit Strategy Cross-Functional Collaboration Roles and Responsibilities Analyze existing data sources to understand data flow, relationships, and usage. Design and implement scalable data models following best practices in normalization, denormalization, and dimensional modeling. Develop and optimize complex SQL queries, including recursive SQLs and macros, to extract, transform, and analyze data. Architect end-to-end data strategies aligned with future-state objectives, focusing on performance, scalability, and flexibility. Reverse engineer data models through data profiling, identifying key attributes and relationships. Collaborate with business and technical stakeholders to map data patterns to underlying processes and use cases. Conduct root cause analysis (RCA) to troubleshoot data inconsistencies and propose effective resolutions. Perform frequency distribution and statistical analyses to detect data trends and quality issues. Design and implement change data capture (CDC), audit logging, and reference data strategies. Define and create mapping tables and helper tables to support flexible and configurable ETL processes. Own and ensure data integrity, accuracy, and completeness across platforms. General Attributes Demonstrates intellectual curiosity and passion for uncovering insights in complex datasets. Skilled communicator capable of translating technical concepts into actionable insights for diverse stakeholders. Engages effectively with clinicians, researchers, data scientists, and IT teams to gather requirements and align objectives. Leads deep-dive data analysis initiatives to support data-driven decision-making. Collaborates across departments to deliver robust, business-aligned data models and solutions. Preferred Skills Familiarity with data visualization tools (e.g., Tableau, Power BI) is an advantage. Experience in the Life Sciences or Pharmaceutical industry is highly desirable.
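As a point of reference for the "recursive SQLs" this posting lists, here is a minimal sketch of a recursive CTE walking a reference-data hierarchy, written in the PostgreSQL/ANSI form (Oracle accepts the same shape without the RECURSIVE keyword). The region table and its columns are hypothetical.

WITH RECURSIVE region_tree AS (
    -- Anchor: top-level regions with no parent
    SELECT region_id, parent_region_id, region_name, 1 AS depth
    FROM region
    WHERE parent_region_id IS NULL
    UNION ALL
    -- Recursive step: attach children to the rows found so far
    SELECT r.region_id, r.parent_region_id, r.region_name, t.depth + 1
    FROM region r
    JOIN region_tree t ON r.parent_region_id = t.region_id
)
SELECT region_id, region_name, depth
FROM region_tree
ORDER BY depth, region_name;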
Posted 1 week ago
4.0 years
0 Lacs
pune, maharashtra, india
On-site
Job Location: Pune | Experience: 4+ Years | Qualification: Graduate | Job Posted On: 15 July, 2025 Job Description We are seeking a skilled Senior SQL Developer with over 4 years of experience in database design, development, and support. The ideal candidate will have hands-on expertise working with MS SQL Server 2008 and above, capable of managing large-scale databases and optimizing database performance in production and development environments. Responsibilities Design, develop, and maintain relational databases ensuring data integrity through normalization, denormalization, and referential integrity using Primary Keys, Foreign Keys, and Triggers. Write and optimize complex Transact-SQL (T-SQL) queries, stored procedures, functions, views, and triggers. Implement business logic and workflows using advanced database objects and indexing strategies. Develop, deploy, and maintain SSIS packages for ETL processes, data transformation, and migration workloads. Perform query tuning, index tuning, and overall database performance optimization using DMVs, execution plans, and best practices. Utilize SQL Server Management Studio (SSMS), SSIS, and SSRS for database development, reporting, and integration tasks. Manage data migration and transformation using Bulk Copy Program (BCP), Data Transformation Services (DTS), and conversion from legacy systems. Work closely with .NET developers, providing guidance on SQL query best practices and troubleshooting. Handle large-scale user databases, applying partitioning and other techniques for scalability and performance. Create and maintain technical documentation and assist in presentations and knowledge sharing within the team. Skills 4+ years of experience with MS SQL Server 2008 and above. Strong knowledge of database design principles including normalization, denormalization, and referential integrity. Expertise in T-SQL (DDL, DML, DCL), transaction isolation levels, and advanced query writing including window functions. Experience with XML data types and indexing strategies. Hands-on experience designing and deploying SSIS packages and working with SSRS. Proven skills in query tuning, index tuning, and database performance optimization. Familiarity with ETL processes and data migration tools like BCP and DTS. Experience managing large databases and applying horizontal partitioning. Excellent analytical, problem-solving, communication, and interpersonal skills. Ability to work collaboratively in a team environment and guide junior developers.
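To illustrate the T-SQL error handling and transaction management this role asks for, here is a minimal sketch of a stored procedure using TRY/CATCH with an explicit transaction. All object names are invented for the example.

CREATE PROCEDURE dbo.usp_ApplyPayment
    @OrderId INT,
    @Amount  DECIMAL(12, 2)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE dbo.Orders
        SET    Balance = Balance - @Amount
        WHERE  OrderId = @OrderId;

        INSERT INTO dbo.Payments (OrderId, Amount, PaidOn)
        VALUES (@OrderId, @Amount, SYSDATETIME());

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;  -- undo both statements on any failure
        THROW;                     -- re-raise the original error to the caller
    END CATCH
END;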
Posted 1 week ago
10.0 years
0 Lacs
hyderabad, telangana, india
On-site
About Chubb Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com. About Chubb India At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning. Position Details Job Title : Lead Data Engineer-Solution Architecture Function/Department : Technology Location : Hyderabad/Bangalore/Bhubaneswar Employment Type : Full Time Role Overview Qualifications: Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related field; Master’s degree preferred Minimum of 10 years’ experience in data architecture or data engineering roles, with a significant focus in P&C insurance domains preferred. Proven track record of successful implementation of data architecture within large-scale transformation programs or projects Comprehensive knowledge of data modelling techniques and methodologies, including data normalization and denormalization practices Hands on expertise across a wide variety of database (Azure SQL, MongoDB, Cosmos), data transformation (Informatica IICS, Databricks), change data capture and data streaming (Apache Kafka) technologies Proven Expertise with data warehousing concepts, ETL processes, and data integration tools (e.g., Informatica, Databricks, ADF) Experience with cloud-based data architectures and platforms (e.g. ADLS, Synapse, Snowflake, Azure SQL Database) Familiarity with .NET Core and Python FastAPI or similar; hands on experience preferred. Expertise in ensuring data security patterns (e.g. tokenization, encryption, obfuscation) Familiarity with authentication and authorization methods and frameworks (e.g. OAuth 2.0). Knowledge of insurance policy operations, regulations, and compliance frameworks specific to Consumer lines Understanding of advanced analytics, AI, and machine learning concepts as they pertain to data architecture Skilled in asynchronous programming patterns. Familiarity with containerization and microservices frameworks, such as Docker and Kubernetes. 
Proficient in utilizing Azure or other cloud services, including AKS, Cosmos NoSQL, Cognitive Search, SQL Database, ADLS, App Insights, and API Management. Familiar with DevSecOps practices and CI/CD tools, including Git, Azure DevOps, and Jenkins. Familiar with Kafka or similar messaging technologies. Familiar with GIS / geospatial systems and terminology preferred. Strong analytical and problem-solving capabilities. Experienced in producing technical documentation to support system design. Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams. Familiarity with Agile methodologies and experience working in Agile project environments, including ceremonies and tools like JIRA. Why Join Us? Be at the forefront of digital transformation in the insurance industry. Lead impactful initiatives that simplify claims processing and enhance customer satisfaction. Work alongside experienced professionals in a collaborative, innovation-driven environment. Why Chubb? Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results. Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence A Great Place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026 Laser focus on excellence : At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results Start-Up Culture : Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter Growth and success : As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment Employee Benefits Our company offers a comprehensive benefits package designed to support our employees’ health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include: Savings and Investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling like Education Reimbursement Programs, Certification programs and access to global learning programs. Health and Welfare Benefits: We care about our employees’ well-being in and out of work and have benefits like Hybrid Work Environment, Employee Assistance Program (EAP), Yearly Free Health campaigns and comprehensive Insurance benefits. Application Process Our recruitment process is designed to be transparent, and inclusive. Step 1 : Submit your application via the Chubb Careers Portal / Linkedin. 
Step 2 : Engage with our recruitment team for an initial discussion. Step 3 : Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable). Step 4 : Final interaction with Chubb leadership. Join Us: With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey. Apply Now : https://www.chubb.com
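One concrete instance of the data obfuscation patterns this role mentions is dynamic data masking, a SQL Server / Azure SQL Database feature; the sketch below is illustrative only, with hypothetical table, column, and role names (tokenization and encryption are separate techniques with their own tooling).

-- Mask columns for non-privileged readers; the stored data is unchanged.
ALTER TABLE dbo.Policyholder
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

ALTER TABLE dbo.Policyholder
ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)');

-- Principals who genuinely need clear text are granted UNMASK explicitly.
GRANT UNMASK TO ClaimsAuditor;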
Posted 1 week ago
8.0 years
0 Lacs
pune, maharashtra, india
On-site
Area(s) of responsibility Qualifications: Bachelor's degree/relevant experience, with SnowPro certification Must Have Skills Overall – 8+ years Relevant – 5+ years Snowflake – Minimum 3+ years 8+ years of experience in Enterprise-level Data Engineering roles. Expertise in OLAP design and deep understanding of data warehousing concepts: star/snowflake schema, ETL/ELT processes, normalization/denormalization, and data modeling. Expertise in writing complex SQL queries, including joins, window functions, and optimization techniques. Expertise and hands-on experience with Snowflake: warehouses, data ingestion with Snowpipe, external integrations, working with all types of views & tables, UDFs, procedures, streams, tasks and serverless, data masking. Experience with Snowflake-specific features, including clustering, partitioning, and schema design best practices. Strong verbal and written communication skills for collaborating with both technical teams and business stakeholders.
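For reference, the Snowflake streams-and-tasks pattern this posting expects hands-on experience with might look like the following minimal sketch; the warehouse, schema, and table names are hypothetical.

-- Capture row-level changes on the landing table.
CREATE OR REPLACE STREAM raw.payments_stream ON TABLE raw.payments;

-- A scheduled task that drains the stream only when it has data.
CREATE OR REPLACE TASK merge_payments
  WAREHOUSE = transform_wh
  SCHEDULE  = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('RAW.PAYMENTS_STREAM')
AS
  INSERT INTO analytics.payments_clean (id, amount, currency, loaded_at)
  SELECT id, amount, currency, loaded_at
  FROM raw.payments_stream
  WHERE METADATA$ACTION = 'INSERT';

ALTER TASK merge_payments RESUME;  -- tasks are created in a suspended state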
Posted 1 week ago
0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Summary: The Senior Data Architect will lead the design and implementation of scalable, high-performance data architectures across both on-premises and cloud environments. This role involves advanced data modelling, SQL development, and ETL design, with a strong focus on data quality, integrity, and strategy. The architect will collaborate closely with cross-functional teams to align technical solutions with business needs, perform in-depth data analysis, and drive the development of robust data models that support long-term organisational goals.
Core Competencies:
• Data Architecture & Strategy
• Advanced Data Modelling (Star, Snowflake)
• SQL Development (Partitioning, Stored Procedures, Recursive Queries)
• Data Profiling & Root Cause Analysis
• ETL Design & Implementation
• Cloud & On-Premise Data Platforms (Oracle, Databricks)
• Data Quality & Governance
• Stakeholder Engagement
• Change Data Capture & Audit Strategy
• Cross-Functional Collaboration
Roles and Responsibilities:
• Analyse existing data sources to understand data flow, relationships, and usage.
• Design and implement scalable data models following best practices in normalisation, denormalisation, and dimensional modelling.
• Develop and optimise complex SQL queries, including recursive SQLs and macros, to extract, transform, and analyse data.
• Architect end-to-end data strategies aligned with future-state objectives, focusing on performance, scalability, and flexibility.
• Reverse engineer data models through data profiling, identifying key attributes and relationships.
• Collaborate with business and technical stakeholders to map data patterns to underlying processes and use cases.
• Conduct root cause analysis (RCA) to troubleshoot data inconsistencies and propose effective resolutions.
• Perform frequency distribution and statistical analyses to detect data trends and quality issues.
• Design and implement change data capture (CDC), audit logging, and reference data strategies.
• Define and create mapping tables and helper tables to support flexible and configurable ETL processes.
• Own and ensure data integrity, accuracy, and completeness across platforms.
General Attributes:
• Demonstrates intellectual curiosity and passion for uncovering insights in complex datasets.
• Skilled communicator capable of translating technical concepts into actionable insights for diverse stakeholders.
• Engages effectively with clinicians, researchers, data scientists, and IT teams to gather requirements and align objectives.
• Leads deep-dive data analysis initiatives to support data-driven decision-making.
• Collaborates across departments to deliver robust, business-aligned data models and solutions.
Preferred Skills:
• Familiarity with data visualisation tools (e.g., Tableau, Power BI) is an advantage.
• Experience in the Life Sciences or Pharmaceutical industry is highly desirable.
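As an illustration of the change-data-capture and audit-logging strategies listed above, here is a minimal trigger-based audit trail in PostgreSQL-style SQL; table and column names are hypothetical, and Oracle and Databricks offer their own CDC mechanisms.

CREATE TABLE customer_audit (
    audit_id    BIGSERIAL   PRIMARY KEY,
    customer_id INT         NOT NULL,
    action      TEXT        NOT NULL,          -- INSERT / UPDATE / DELETE
    changed_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
    old_row     JSONB,
    new_row     JSONB
);

CREATE OR REPLACE FUNCTION audit_customer() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO customer_audit (customer_id, action, old_row, new_row)
        VALUES (NEW.customer_id, TG_OP, NULL, to_jsonb(NEW));
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO customer_audit (customer_id, action, old_row, new_row)
        VALUES (NEW.customer_id, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
        RETURN NEW;
    ELSE  -- DELETE
        INSERT INTO customer_audit (customer_id, action, old_row, new_row)
        VALUES (OLD.customer_id, TG_OP, to_jsonb(OLD), NULL);
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_customer_audit
AFTER INSERT OR UPDATE OR DELETE ON customer
FOR EACH ROW EXECUTE FUNCTION audit_customer();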
Posted 2 weeks ago
3.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionises customer engagement by transforming contact centres into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organisations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Competencies: Data Modelling: Skilled in designing data warehouse schemas (e.g., star and snowflake schemas), with experience in fact and dimension tables, as well as normalisation and denormalization techniques. Data Warehousing & Storage Solutions : Proficient with platforms such as Snowflake, Amazon Redshift, Google BigQuery, and Azure Synapse Analytics. ETL/ELT Processes : Expertise in ETL/ELT tools (e.g., Apache NiFi, Apache Airflow, Informatica, Talend, and dbt) to facilitate data movement from source systems to the data warehouse. SQL Proficiency : Advanced SQL skills for complex queries, indexing, and performance tuning. Programming Skills : Strong in Python or Java for building custom data pipelines and handling advanced data transformations. Data Integration : Experience with real-time data integration tools like Apache Kafka, Apache Spark, AWS Glue, Fivetran, and Stitch. Data Pipeline Management : Familiar with workflow automation tools (e.g., Apache Airflow, Luigi) to orchestrate and monitor data pipelines. APIs and Data Feeds : Knowledgeable in API-based integrations, especially for aggregating data from distributed sources. Responsibilities – Design and implement analytical platforms that provide insightful dashboards to customers. Develop and maintain data warehouse schemas, such as star schemas, fact tables, and dimensions, to support efficient querying and data access. Oversee data propagation processes from source databases to warehouse-specific databases/tools, ensuring data accuracy, reliability, and timeliness. Ensure the architectural design is extensible and scalable to adapt to future needs. Requirement - Qualification: B.E/B.Tech/M.E/M.Tech/PhD from tier 1 Engineering institutes with relevant work experience with a top technology company. 3+ years of Backend and Infrastructure Experience with a strong track record in development, architecture and design. Hands-on experience with large-scale databases, high-scale messaging systems and real-time Job Queues. Experience navigating and understanding large-scale systems and complex code-bases, and architectural patterns. Proven experience in building high-scale data platforms. Strong expertise in data warehouse schema design (star schema, fact tables, dimensions). Experience with data movement, transformation, and integration tools for data propagation across systems. Ability to evaluate and implement best practices in data architecture for scalable solutions. Nice to have: Experience with Google Cloud, Django, Postgres, Celery, Redis. Some experience with AI Infrastructure and Operations. To learn more visit : https://thelevel.ai/ Funding : https://www.crunchbase.com/organization/level-ai LinkedIn : https://www.linkedin.com/company/level-ai/ Our AI platform : https://www.youtube.com/watch?v=g06q2V_kb-s
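One sketch of the fact-and-dimension maintenance such a warehouse schema implies: a Type 2 slowly changing dimension load in PostgreSQL-style SQL, using the common close-then-insert pattern. All table and column names (including the valid_from/valid_to/is_current tracking columns) are hypothetical.

-- Step 1: close out current rows whose tracked attributes changed in staging.
UPDATE dim_customer d
SET    valid_to   = CURRENT_DATE,
       is_current = FALSE
FROM   staging_customer s
WHERE  d.customer_id = s.customer_id
  AND  d.is_current
  AND  d.segment IS DISTINCT FROM s.segment;

-- Step 2: insert a fresh current version for new and changed customers.
INSERT INTO dim_customer
    (customer_id, customer_name, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.customer_name, s.segment,
       CURRENT_DATE, DATE '9999-12-31', TRUE
FROM   staging_customer s
LEFT JOIN dim_customer d
       ON d.customer_id = s.customer_id AND d.is_current
WHERE  d.customer_id IS NULL;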
Posted 2 weeks ago
5.0 years
0 Lacs
ghaziabad, uttar pradesh, india
On-site
Job Title/Designation: Team Lead – Database. Core SQL Skills (MS SQL, 5+ years; must-have skills): Advanced querying (complex joins, subqueries, CTEs, window functions). Stored procedure, function, and trigger development. Error handling and transaction management. Dynamic SQL. Working with various data types (including XML, JSON, and spatial data). Data Manipulation Language (DML): Efficient INSERT, UPDATE, DELETE, and MERGE statements. Understanding of data integrity constraints. Data Definition Language (DDL): Creating and modifying tables, views, indexes, and other database objects. Understanding of database schemas and normalization. Database Design and Architecture: Relational Database Design: Normalization and denormalization principles. Entity-relationship modeling (ERD). Data warehousing and OLAP concepts (if applicable). Database Object Design: Designing efficient and maintainable stored procedures, functions, and triggers. Understanding of indexing strategies. Implementing database security. Database Architecture: Understanding of the different SQL Server editions and their capabilities. Understanding of high availability and disaster recovery concepts. Performance Optimization: Query Optimization: Analyzing query execution plans. Identifying and resolving performance bottlenecks. Index tuning and management. Understanding of statistics and their impact on query performance. Database Performance Tuning: Monitoring and optimizing database server performance. Understanding of SQL Server Profiler and Extended Events. Understanding of resource management. When considering SQL Server Agent in the context of a SQL Server database developer's skills, it's crucial to understand how it's used for automating database tasks; here's how it fits into the skillset. SQL Server Agent's Role: Automation: SQL Server Agent is the job scheduling and alerting service in SQL Server. It enables developers to automate routine database tasks, reducing manual effort and ensuring consistency. Key Functions: Scheduling jobs (e.g., backups, index maintenance, data imports). Monitoring database events and sending alerts. Executing T-SQL scripts, operating system commands, and other tasks. Skills Related to SQL Server Agent: Job Creation and Management: Ability to create and configure SQL Server Agent jobs. Defining job steps, schedules, and notifications. Understanding of different job step types (T-SQL, operating system command, PowerShell). Scheduling: Proficiency in creating flexible job schedules (e.g., daily, weekly, monthly, recurring). Alerting and Notifications: Setting up alerts to notify administrators of database events (e.g., errors, performance issues). Configuring notifications (e.g., email, pager) for job status and alerts. Troubleshooting: Analyzing job history and logs to troubleshoot job failures. Identifying and resolving issues related to job scheduling and execution. Soft Skills: Problem-solving: Ability to analyze and resolve complex database issues. Communication: Clear and effective communication with developers, analysts, and other stakeholders. Teamwork: Ability to work effectively in a team environment. Attention to detail: Meticulous attention to detail in database design and development. Continuous learning: Staying up-to-date with the latest SQL Server features and technologies. Good-to-have skills: PostgreSQL, MySQL, NoSQL, SSIS (ETL tool).
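To make the SQL Server Agent skills above concrete, here is a minimal sketch that creates and schedules a nightly maintenance job through the documented msdb procedures; the job, step, database, and procedure names are invented.

USE msdb;
GO
EXEC dbo.sp_add_job
     @job_name = N'Nightly Index Maintenance';

EXEC dbo.sp_add_jobstep
     @job_name      = N'Nightly Index Maintenance',
     @step_name     = N'Rebuild fragmented indexes',
     @subsystem     = N'TSQL',
     @command       = N'EXEC dbo.usp_IndexMaintenance;',  -- hypothetical proc
     @database_name = N'SalesDB';

EXEC dbo.sp_add_jobschedule
     @job_name          = N'Nightly Index Maintenance',
     @name              = N'Daily 2 AM',
     @freq_type         = 4,      -- daily
     @freq_interval     = 1,      -- every day
     @active_start_time = 20000;  -- HHMMSS, i.e. 02:00:00

EXEC dbo.sp_add_jobserver
     @job_name = N'Nightly Index Maintenance';  -- register on the local server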
Posted 3 weeks ago
6.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Job Description & Summary: We are looking for a skilled Azure Cloud Data Engineer with strong expertise in Python programming , Databricks , and advanced SQL to join our team in Noida . The candidate will be responsible for designing, developing, and optimizing scalable data solutions on the Azure cloud platform. You will play a critical role in building data pipelines and transforming complex data into actionable insights by leveraging cloud-native tools and technologies. Level: Senior Consultant / Manager Location: Noida LOS: Competency: Data & Analytics Skill: Azure Data Engineering Job Position Title: Azure Cloud Data Engineer with Python Programming – Senior Consultant/Manager (6+ Years) Responsibilities: Design, develop, and manage scalable and secure data pipelines using Azure Databricks and Azure Data Factory. Write clean, efficient, and reusable code primarily in Python for cloud automation, data processing, and orchestration. Architect and implement cloud-based data solutions, integrating structured and unstructured data sources. Build and optimize ETL workflows and ensure seamless data integration across platforms. Develop data models using normalization and denormalization techniques to support OLTP and OLAP systems. Manage Azure-based storage solutions including Azure Data Lake and Blob Storage. Troubleshoot performance bottlenecks in data flows and ETL processes. Integrate advanced analytics and support BI use cases within the Azure ecosystem. Lead code reviews and ensure adherence to version control practices (e.g., Git). Contribute to the design and deployment of enterprise-level data warehousing solutions. Stay current with Azure cloud technologies and Python ecosystem updates to adopt best practices and emerging tools. Mandatory skill sets: Strong Python programming skills (Must-Have) – advanced scripting, automation, and cloud SDK experience Strong SQL skills (Must-Have) Azure Databricks (Must-Have) Azure Data Factory Azure Blob Storage / Azure Data Lake Storage Apache Spark (hands-on experience) Data modeling (Normalization & Denormalization) Data warehousing and BI tools integration Git (Version Control) Building scalable ETL pipelines Preferred skill sets (Good to Have): Understanding of OLTP and OLAP environments Experience with Kafka and Hadoop Azure Synapse Analytics Azure DevOps for CI/CD integration Agile delivery methodologies Years of experience required: 6+ years of overall experience in cloud engineering or data engineering roles, with at least 2-3 years of hands-on experience with Azure cloud services. Proven track record of strong Python development with at least 2-3 years of hands-on experience. Education qualification: BE/B.Tech/MBA/MCA
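By way of example, here is a Databricks SQL sketch of the partitioned Delta storage and optimization work this pipeline role involves; the schema and table names are illustrative assumptions, not part of the posting.

CREATE TABLE IF NOT EXISTS sales.orders_silver (
    order_id    BIGINT,
    customer_id BIGINT,
    order_date  DATE,
    amount      DECIMAL(12, 2)
)
USING DELTA
PARTITIONED BY (order_date);

-- Compact small files and co-locate rows that are commonly filtered together.
OPTIMIZE sales.orders_silver
ZORDER BY (customer_id);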
Posted 3 weeks ago
5.0 years
45 - 55 Lacs
bhubaneswar, odisha, india
Remote
Experience : 5.00 + years Salary : INR 4500000-5500000 / year (based on experience) Expected Notice Period : 30 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Medblocks) (*Note: This is a requirement for one of Uplers' client - Medblocks) What do you need for this opportunity? Must have skills required: Kafka, Snowflake, CI/CD Pipeline, ETL/ELT pipelines, PostgreSQL, Backend Medblocks is Looking for: As a Principal Engineer at Medblocks, you will be the primary architect of our data infrastructure, designing and implementing the foundation that powers our entire healthcare platform. This is a high-ownership role where you’ll make critical decisions about data architecture, security, and performance that will scale with us for years to come. Key responsibilities include: Architecting and implementing complex PostgreSQL database schemas that handle millions of healthcare records with sub-second query performance Writing and optimizing advanced PostgreSQL features including RLS (Row-Level Security) policies, stored procedures, triggers, and custom functions Building reliable, real-time data pipelines using CDC (Change Data Capture) and event-driven architectures to sync data across multiple systems Designing and implementing database performance monitoring, query optimization, and debugging strategies Creating data integration patterns that map between our core PostgreSQL database and external systems (data warehouses, analytics platforms, third-party APIs) Establishing database CI/CD practices including migration strategies, testing frameworks, and zero-downtime deployment patterns Mentoring team members on database best practices and reviewing critical database design decisions Documenting complex data models and creating technical specifications that other engineers can follow Qualities We Value: Deep technical curiosity: You read PostgreSQL release notes for fun and have opinions about indexing strategies Systems thinking: You understand how database decisions impact the entire application stack Ownership mentality: You’ve been the person others turn to when the database is on fire at 2 AM Teaching ability: You can explain why denormalization might be the right choice to both junior engineers and senior architects Pragmatism: You know when to use triggers vs application logic, when to normalize vs denormalize Requirements: 5+ years of production database engineering experience, with at least 3 years focused on PostgreSQL Proven experience designing complex relational data models (50+ tables) that have scaled in production Deep PostgreSQL expertise including: Advanced SQL (CTEs, window functions, recursive queries) PL/pgSQL programming for procedures and triggers Performance tuning (query plans, indexing strategies, vacuum configuration) Security implementations (RLS, role-based access) Experience building data pipelines and ETL/ELT processes at scale Hands-on experience with data integration patterns (CDC, event streaming, batch processing) Strong DevOps skills including: Database migration tools and strategies CI/CD pipeline implementation Infrastructure as Code (Terraform, Ansible) Containerization and orchestration Experience with monitoring and debugging production database issues Track record of working in high-autonomy environments where you owned technical decisions Nice to haves: Experience with data warehouse technologies (Snowflake, BigQuery, Redshift, Apache Iceberg) 
Knowledge of event streaming platforms (Kafka, Debezium, Apache Pulsar) Familiarity with dbt, Apache Airflow, or similar data orchestration tools Experience with time-series data or audit log patterns Contributions to PostgreSQL extensions or database-related open source projects Experience with multi-tenant database architectures Background in regulated industries requiring data compliance Interview Process Introductory Call Tech Round: Previous Experience + Case Study/Problem Solving How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
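For a flavour of the Row-Level Security work this role centres on, here is a minimal PostgreSQL sketch; the table, column, and setting names are hypothetical.

ALTER TABLE patient_record ENABLE ROW LEVEL SECURITY;

-- Each session sees only its own clinic's rows.
CREATE POLICY clinic_isolation ON patient_record
    USING (clinic_id = current_setting('app.clinic_id', true)::int);

-- The application sets the tenant context per connection or transaction:
-- SET app.clinic_id = '42';

Note that policies do not apply to the table owner unless FORCE ROW LEVEL SECURITY is also set on the table.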