Appstek Information Services is a technology-driven company specializing in software development, IT consulting, and technology solutions.
Not specified
INR 9.0 - 18.0 Lacs P.A.
Work from Office
Full Time
Job Summary:
We are seeking a highly skilled and experienced Senior Data Engineer to join our team in Hyderabad. This individual will be responsible for designing, building, and optimizing data infrastructure and pipelines using Azure Synapse, PySpark, and Python. The ideal candidate will bring deep expertise in cloud platforms and data engineering best practices, along with a strong ability to work with complex data workflows to deliver high-quality, scalable data solutions.

Key Responsibilities:
Data Infrastructure and Pipeline Development:
Design, develop, and maintain complex ETL/ELT pipelines using Azure Synapse.
Design, build, and maintain data pipelines and APIs in the cloud, with a focus on the Azure platform.
Optimize data pipelines for performance, scalability, and cost efficiency.
Implement data governance, quality, and security best practices throughout data workflows.

Experience:
5 to 8 years of experience in data engineering or a related field.
At least 2 years of hands-on experience with Databricks, PySpark, Azure Synapse, and cloud platforms (preferably Azure).

Technical Skills:
Strong expertise in Python/PySpark programming applied to data engineering.
Solid experience with cloud platforms (Azure preferred; AWS/GCP also considered) and their data services.
Advanced SQL skills and familiarity with relational and NoSQL databases (e.g., MS SQL, MySQL, PostgreSQL).
Strong understanding of CI/CD pipelines and automation practices in data engineering.
Experience with large-scale data processing on cloud platforms.

Soft Skills:
Strong analytical and problem-solving skills.
Excellent communication skills and the ability to work collaboratively across teams.
High attention to detail with a proactive approach to improving systems and processes.
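For context on the kind of PySpark pipeline work described above, here is a minimal, illustrative sketch of a batch ETL step. All paths, column names, and dataset names are hypothetical placeholders, not part of the role definition.

```python
# Minimal illustrative PySpark ETL sketch (hypothetical data and paths).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw order events from a data lake location (placeholder path).
orders = spark.read.parquet("abfss://raw@datalake.dfs.core.windows.net/orders/")

# Transform: basic cleansing and a daily revenue aggregate.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_revenue"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Load: write the curated table back to the lake, partitioned by date.
(daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@datalake.dfs.core.windows.net/daily_revenue/"))
```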
Not specified
INR 4.0 - 9.0 Lacs P.A.
Work from Office
Full Time
About the Role:
We are seeking a skilled Geospatial Vision Engineer with 2+ years of experience to join our team. You will be responsible for developing and optimizing advanced vision algorithms, focusing on 3D depth estimation, geospatial analysis, multi-sensor object detection, and drone-based 3D vision. If you are passionate about AI, sensor fusion, and working with LiDAR, point clouds, hyperspectral cameras, and drone imaging, we would love to hear from you!

Required Qualifications:
Education:
Bachelor's or Master's degree in Computer Science, Electrical Engineering, Remote Sensing, or a related field.

Technical Skills:
Strong programming skills in Python and C++.
Experience with OpenCV, TensorFlow, PyTorch, Keras, and Scikit-learn.
Proficiency in deep learning architectures such as CNNs, RNNs, and Vision Transformers.
Hands-on experience with object detection and segmentation models such as YOLO, Faster R-CNN, and SSD.
Proficiency in 3D libraries such as Open3D, PCL (Point Cloud Library), PDAL, and MeshLab.
Expertise in geospatial vision, 3D depth estimation, multi-sensor fusion, and drone-based 3D vision.
Experience working with LiDAR, point clouds, hyperspectral cameras, satellite imagery, and drone-mounted sensors.
Familiarity with cloud computing platforms (AWS, Google Cloud).
Understanding of MLOps tools (Docker, Kubernetes, ONNX, TensorRT, or OpenVINO).

Soft Skills:
Strong analytical and problem-solving abilities.
Ability to work independently and within a team.

Key Responsibilities:
Develop and Optimize Geospatial Vision Algorithms:
Design, implement, and optimize computer vision and deep learning models for 3D depth estimation, geospatial analysis, and 3D object detection.
Implement and fine-tune object detection models for multi-sensor fusion, integrating LiDAR, hyperspectral imaging, point cloud data, and drone-based 3D vision.

Machine Learning & Sensor Fusion Implementation:
Train, fine-tune, and deploy deep learning models using TensorFlow, PyTorch, OpenCV, and Scikit-learn.
Work with CNNs, transformers, and self-supervised learning techniques for geospatial applications.
Optimize models for edge computing and real-time processing in autonomous systems and drones.

Data Processing & Model Training:
Use 3D annotation tools for 3D data preparation.
Process and augment large-scale 3D datasets, including LiDAR, satellite imagery, drone imaging, hyperspectral, and depth estimation data.
Optimize model performance for real-time applications on cloud and embedded platforms.

Software Development & Deployment:
Develop efficient Python/C++ pipelines for real-time 3D vision and geospatial processing.
Deploy AI models into production environments, drones, and autonomous systems.

Collaboration & Research:
Work closely with the AI team and the application product team to develop innovative solutions.
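As a pointer to the kind of point cloud handling this role involves, below is a minimal, illustrative Open3D sketch. The input file name is a hypothetical placeholder, and the steps shown (downsampling, normal estimation, outlier removal) are only one example of typical LiDAR preprocessing, not a prescribed workflow.

```python
# Minimal illustrative point cloud preprocessing sketch with Open3D.
import numpy as np
import open3d as o3d

# Load a LiDAR scan stored as a PCD file (placeholder file name).
pcd = o3d.io.read_point_cloud("scan.pcd")

# Downsample to reduce point density, then estimate surface normals.
down = pcd.voxel_down_sample(voxel_size=0.05)
down.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Remove statistical outliers before further steps such as segmentation.
clean, _ = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
print(f"Points after cleaning: {np.asarray(clean.points).shape[0]}")
```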
1. Are certifications needed?
A. Certifications in cloud or data-related fields are often preferred.
2. Do they offer internships?
A. Yes, internships are available for students and recent graduates.
3. Do they support remote work?
A. Yes, hybrid and remote roles are offered depending on the project.
4. How can I get a job there?
A. Apply via the careers portal, attend campus drives, or use referrals.
5. How many rounds are there in the interview?
A. Usually 2 to 3 rounds including technical and HR.
6. What is the interview process?
A. It typically includes aptitude, technical, and HR rounds.
7. What is the work culture like?
A. The company promotes flexibility, innovation, and collaboration.
8. What is their average salary for freshers?
A. Freshers earn between 3.5 and 6 LPA depending on the role.
9. What kind of projects do they handle?
A. They handle digital transformation, consulting, and IT services.
10. What technologies do they work with?
A. They work with cloud, AI, cybersecurity, and digital solutions.