Komhar Infotech

4 Job openings at Komhar Infotech
Senior SQL Developer | Hyderabad | 6 - 10 years | INR 5.0 - 15.0 Lacs P.A. | Hybrid | Full Time

Summary
Highly experienced and results-driven Senior SQL Developer with 8+ years of expertise in designing, developing, and optimizing complex database solutions. Proficient in a variety of SQL dialects, including T-SQL, Oracle SQL, and GCP BigQuery. Proven ability to deliver high-performance queries, manage data warehousing projects, and collaborate with cross-functional teams. Possesses a solid understanding of the software development lifecycle, including JIRA and QA processes, and has hands-on experience with Python and Shell scripting for automation and data manipulation.

Technical Skills
Database: SQL Server, Oracle, Google Cloud Platform (BigQuery)
Languages: SQL, T-SQL, PL/SQL, Python, Shell Scripting
Tools & Platforms: JIRA, Git, Google Cloud Platform
Concepts: Data Warehousing, ETL, Data Modeling, Query Performance Tuning, QA Testing

Professional Experience
Senior SQL Developer
• Designed, developed, and maintained complex SQL queries, stored procedures, functions, and views to support business intelligence and reporting needs.
• Optimized database performance by tuning slow-running queries, resulting in a 40% reduction in average query execution time.
• Led the migration of on-premise data warehouses to GCP BigQuery, implementing best practices for data partitioning and clustering to improve query efficiency and reduce costs (illustrated in the sketch after this résumé).
• Collaborated with a cross-functional team of developers, data analysts, and QA engineers using JIRA to manage project tasks and track progress.
• Developed Python scripts to automate daily data extraction and loading (ETL) processes, reducing manual effort by over 60%.
• Participated in the QA process by reviewing test plans and providing feedback on data validation and integrity.

SQL Developer
• Developed and optimized Oracle SQL and T-SQL queries to build and maintain data marts for financial reporting.
• Wrote Shell scripts to schedule and monitor database backup and maintenance jobs.
• Assisted in data modeling and schema design for new database projects.
• Provided support to business users and resolved database-related issues in a timely manner.

Education
Bachelor of Science in Computer Science
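For illustration only, here is a minimal Python sketch of the BigQuery partitioning and clustering approach the migration bullet above refers to, using the google-cloud-bigquery client library; the project, dataset, and column names are hypothetical placeholders, not part of the résumé:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical warehouse table: partitioned by date, clustered by customer.
table = bigquery.Table(
    "my-project.sales_dw.orders",  # placeholder project/dataset/table
    schema=[
        bigquery.SchemaField("order_id", "STRING"),
        bigquery.SchemaField("customer_id", "STRING"),
        bigquery.SchemaField("order_date", "DATE"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ],
)
# Daily partitioning on the date column prunes scanned bytes (and cost)
# whenever queries filter on order_date.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="order_date",
)
# Clustering co-locates rows with the same customer_id within each partition.
table.clustering_fields = ["customer_id"]

client.create_table(table)
```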

Data Engineer (Ab-Initio & GCP) | Hyderabad | 4 - 6 years | INR 5.0 - 15.0 Lacs P.A. | Work from Office | Full Time

Job Summary
We are looking for highly skilled Data Engineers with 4 to 6 years of experience in building and managing advanced data solutions. The ideal candidate should have extensive experience with SQL, Ab-Initio, Teradata, and Google Cloud Platform (GCP).

Key Responsibilities
• Contribute to the design, development, and optimization of large-scale data pipelines, ensuring they meet business and technical requirements.
• Implement data solutions using SQL, Ab-Initio, Teradata, and GCP, ensuring scalability, reliability, and performance.
• Mentor and guide colleagues in the development and execution of ETL processes and data integration solutions.
• Take ownership of end-to-end data workflows, from data ingestion to transformation, storage, and accessibility (a minimal ingestion sketch follows this listing).
• Lead performance tuning and optimization efforts for complex SQL queries and Teradata database systems.
• Design and implement data governance, quality, and security best practices to ensure data integrity and compliance.
• Manage the migration of legacy data systems to cloud-based solutions on Google Cloud Platform (GCP).
• Ensure continuous improvement and automation of data pipelines and workflows.
• Troubleshoot and resolve issues related to data quality, pipeline performance, and system integration.
• Stay up to date with industry trends and emerging technologies to drive innovation and improve data engineering practices within the team.

Required Skills
• 4 to 6 years of experience in data engineering or related roles.
• Strong expertise in SQL, Teradata, and Ab-Initio.
• Proficient, hands-on experience with the Ab-Initio development products: GDE, Conduct>IT, Control Center, Continuous Flows, EME, M-HUB, etc.
• Hands-on experience with Unix/Shell scripting.
• Proficient in scheduling and analyzing jobs using Autosys (or a similar scheduler).
• In-depth experience with Google Cloud Platform (GCP), including tools like BigQuery, Cloud Storage, Dataflow, etc.
• Proven track record of leading teams and projects related to data engineering and ETL pipeline development.
• Experience with data warehousing and cloud-native storage solutions.
• Strong analytical and problem-solving skills.
• Experience in setting up and enforcing data governance, security, and compliance standards.

Preferred (good to have) Skills
• Familiarity with other cloud platforms/services (AWS, Azure).
• Familiarity with other ETL tools like Informatica, DataStage, Alteryx, etc.
• Knowledge of Big Data technologies like Hadoop, Spark, Hive, Scala, etc.
• Strong communication skills and the ability to collaborate effectively with both technical and non-technical teams.
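As context for the ingestion-to-storage responsibility above, a hedged minimal sketch of one common GCP pattern (CSV files in Cloud Storage loaded into a BigQuery staging table) using the google-cloud-bigquery client; the bucket, project, and table names are invented for illustration:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Append CSV exports landed in Cloud Storage into a staging table.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the header row
    autodetect=True,      # infer the schema from the files
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/transactions_*.csv",  # placeholder URI
    "my-project.staging.transactions",            # placeholder table
    job_config=job_config,
)
load_job.result()  # block until done; raises on failure

table = client.get_table("my-project.staging.transactions")
print(f"Staging table now holds {table.num_rows} rows")
```

In a production pipeline, a step like this would typically run under a scheduler (for example Autosys, as the listing mentions) with alerting on failure.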

AWS Generative AI Engineer | Hyderabad | 2 - 3 years | INR 4.5 - 9.5 Lacs P.A. | Work from Office | Full Time

Job Description: AWS Generative AI Engineer
Work Location: Hyderabad

Key Responsibilities
• Design and implement serverless applications using AWS Lambda.
• Develop and manage ETL pipelines using AWS Lambda/Glue.
• Work with Amazon S3 for data storage, organization, and lifecycle management.
• Build, train, and deploy machine learning models using Amazon SageMaker.
• Integrate and utilize Amazon Bedrock for generative AI applications.
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Ensure best practices in cloud architecture, security, and cost optimization.

Required Skills & Experience
• 3+ years of hands-on experience with AWS services, especially Lambda, Glue, S3, SageMaker, and Bedrock.
• Strong programming skills in Python.
• Experience with data engineering, ML workflows, and MLOps practices.
• Familiarity with CI/CD pipelines and infrastructure as code (e.g., CloudFormation, Terraform).
• Excellent problem-solving and communication skills.
• Knowledge of generative AI and Large Language Models (LLMs).

Exercise
Build a Python-based tool that uses a Large Language Model (LLM) to:
1. Ingest a requirement document (e.g., in plain text, Word, or PDF).
2. Generate user stories from the document.
3. Generate test cases based on those user stories.
(A minimal sketch of this exercise appears after this listing.)

Deliverables
• Python code (can be in a Jupyter notebook or script format).
• README with setup instructions and explanation of approach.
• Sample output (user stories + test cases).
• Optional: Dockerfile or environment.yml for reproducibility.

Tech Stack
• Python 3.10+
• Claude API, OpenAI API, or Hugging Face Transformers (e.g., Claude, GPT-3.5, LLaMA, Mistral).
• LangChain or other LLM wrappers (optional).
• Advanced prompt engineering for extracting structured outputs.
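One possible minimal sketch of the exercise, assuming a plain-text requirement document and the OpenAI chat completions API (the model name, prompts, and file handling are illustrative choices, not requirements of the exercise; Word/PDF ingestion and structured-output prompting are omitted for brevity):

```python
import sys
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt to the model and return its reply."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def main(path: str) -> None:
    requirements = open(path, encoding="utf-8").read()  # plain text only

    # Step 2: user stories from the requirement document.
    stories = ask(
        "Read the requirement document below and produce numbered user "
        "stories in the form 'As a <role>, I want <goal> so that "
        "<benefit>'.\n\n" + requirements
    )

    # Step 3: test cases derived from the generated stories.
    tests = ask(
        "For each user story below, write test cases with an ID, "
        "preconditions, steps, and expected result.\n\n" + stories
    )

    print("== USER STORIES ==\n" + stories)
    print("\n== TEST CASES ==\n" + tests)

if __name__ == "__main__":
    main(sys.argv[1])
```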

Data Engineer | Hyderabad | 8 - 11 years | INR 9.5 - 19.5 Lacs P.A. | Work from Office | Full Time

We are looking for a DataStage Developer Lead to join our data engineering team. The ideal candidate will have strong experience in IBM InfoSphere DataStage, ETL development, and leading end-to-end data integration projects. The role requires hands-on development, team leadership, and close collaboration with business and technical stakeholders.

Key Responsibilities:
• Lead the design, development, and implementation of ETL processes using IBM DataStage.
• Collaborate with business analysts and data architects to understand data requirements.
• Optimize, maintain, and troubleshoot existing DataStage jobs and data pipelines.
• Ensure high-quality deliverables through code reviews, unit testing, and documentation (a small validation sketch follows this listing).
• Mentor junior team members and provide technical leadership.
• Work closely with QA and DevOps teams for deployment and release management.
• Participate in project planning, estimation, and status reporting activities.

Required Skills:
• Strong hands-on experience with IBM InfoSphere DataStage.
• Solid understanding of ETL concepts, data warehousing, and data modeling.
• Proficient in SQL and working with large relational databases (Oracle, DB2, etc.).
• Experience with performance tuning and troubleshooting of ETL jobs.
• Familiarity with data quality, data governance, and master data management practices.
• Excellent communication and problem-solving skills.

Preferred Skills:
• Experience with cloud-based ETL tools or hybrid cloud environments (e.g., AWS, Azure).
• Knowledge of scheduling tools like Control-M, Autosys.
• Exposure to Agile/Scrum methodologies.
• Knowledge of shell scripting.
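As an aside on the testing responsibility above, a hedged Python sketch of one post-load validation a lead might automate around DataStage jobs: a source-versus-target row-count reconciliation over generic DB-API connections. The demo uses in-memory sqlite3 stand-ins so it runs as-is; a real check would open Oracle or DB2 connections via drivers such as cx_Oracle or ibm_db:

```python
import sqlite3

def row_count(conn, table: str) -> int:
    """Count rows in a table through any DB-API connection."""
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")  # table name is trusted input
    return cur.fetchone()[0]

def reconcile(src_conn, tgt_conn, src_table: str, tgt_table: str) -> bool:
    """Compare source and target row counts after an ETL load."""
    src, tgt = row_count(src_conn, src_table), row_count(tgt_conn, tgt_table)
    print(f"source={src} target={tgt} diff={src - tgt}")
    return src == tgt

if __name__ == "__main__":
    # Demo with sqlite3; placeholder table name "orders" is illustrative.
    src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for conn in (src, tgt):
        conn.execute("CREATE TABLE orders (id INTEGER)")
        conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(3)])
    assert reconcile(src, tgt, "orders", "orders")
```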